<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ontological Considerations for Uncertainty Propagation in High Level Information Fusion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mark Locher</string-name>
          <email>mlocher@gmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>George Mason University and SRA</institution>
          ,
          <addr-line>International Fairfax VA</addr-line>
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Paulo C. G. Costa George Mason University Fairfax VA</institution>
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Uncertainty propagation in a level 2 high level information fusion (HLIF) process is affected by a number of considerations. These include the varying complexities of the various types of level 2 HLIF. Five different types are identified, ranging from simple entity attribute refinement using situation status data to the development of a complete situation assessment assembled from applicable situational fragment data. Additional considerations include uncertainty handling in the input data, uncertainty representation, the effects of the reasoning technique used in the fusion process, and output considerations. Input data considerations include the data's relevance to the situation, its credibility, and its force or weight. Uncertainty representation concerns follow the uncertainty ontology developed by the W3C Incubator Group on Uncertainty Reasoning. For uncertainty effects of the fusion process, a basic fusion process model is presented, showing the impacts of uncertainty in four areas. Finally, for output uncertainty, the significance of a closed-world versus open-world assumption is discussed.</p>
      </abstract>
      <kwd-group>
        <kwd>High-level fusion</kwd>
        <kwd>input uncertainty</kwd>
        <kwd>process uncertainty</kwd>
        <kwd>output uncertainty</kwd>
        <kwd>uncertainty representation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        The past 20 years have seen an explosion of systems and
techniques for collecting, storing and managing large and
diverse sets of data of interest to a number of communities.
These data are collected by a wide variety of mechanisms, each
of which has varying considerations that influence the
uncertainty in the data. To provide useful information
for a particular question or problem, the relevant data
(“evidence”) must be identified, extracted and then fused to
provide insight or answers. The
information fusion community has developed a widely
accepted functional layered model of information fusion.
These layers can be divided into low level and high-level
fusion. At all levels, the data going into a fusion process is
recognized as having uncertainty, which affects in various
ways the degree of certainty in the output of the process.
Low-level fusion has been widely explored, primarily by the
radar tracking community, and issues of uncertainty
determination and propagation are well understood [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        High-level fusion, on the other hand, requires reasoning
about complex situations, with a diversity of entities and
various relationships within and between those entities. This
reasoning is often expressed symbolically, using logic-based
approaches [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. There has been significant work in using
ontological approaches in developing fusion techniques and
some of these approaches have taken uncertainty
considerations into account (e.g. [3] [
        <xref ref-type="bibr" rid="ref3">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">6</xref>
        ]). Various
techniques exist to model and propagate uncertainty in a fusion
process, with varying strengths and difficulties. This suggests
that their relative performance in a fusion system should vary
significantly depending on the types and nature of the
uncertainties within both the input data and the context of the
problem set modeled with the fusion system. Unfortunately,
there is no consensus within the fusion community on how to
evaluate the relative effectiveness of each technique. Work in
this area will be hampered until the evaluation question is at
least better defined, if not resolved.
      </p>
      <p>
        The International Society for Information Fusion (ISIF)
chartered the Evaluation of Technologies for Uncertainty
Reasoning Working Group (ETURWG) to provide a forum to
collectively address this common need in the ISIF community,
coordinate with researchers in the area, and evaluate techniques
for assessing, managing, and reducing uncertainty [7]. In its
first year, ETURWG defined its scope and developed the
uncertainty representation and reasoning evaluation framework
(URREF) ontology. The URREF ontology aims to provide
guidance for defining the actual concepts and criteria that
together comprise the comprehensive uncertainty evaluation
framework [
        <xref ref-type="bibr" rid="ref7">8</xref>
        ]. It is evident that part of the issue in evaluating
different uncertainty representation systems is to properly
understand how a high-level fusion process works and how
uncertainty is propagated through the process.
      </p>
      <p>This paper aims to help establish the various considerations
about how uncertainty affects a HLIF process. It will begin by
defining what is meant by a HLIF process, and then focus on
one class of HLIF, the level 2 HLIF. From there, it will define
a taxonomy of Level 2 HLIF, where increasing complexity of
level 2 HLIF types have additional uncertainty considerations.
Then it explores uncertainty propagation issues associated with
uncertainty in the input data, the uncertainty effects of both the
fusion reasoning process and the representation scheme, and
the output uncertainty. It concludes with a top-level discussion
of an overall mathematical approach applicable to these
considerations.</p>
    </sec>
    <sec id="sec-2">
      <title>DEFINITION OF HIGH-LEVEL FUSION</title>
      <p>
        A widely accepted definition of High-Level Information
Fusion (HLIF) is that it refers to the fusion processes classified
as level 2 and above within the revised Joint Directors of
Laboratories data fusion model. This model establishes five
functional levels, as defined in [
        <xref ref-type="bibr" rid="ref8">9</xref>
        ] and repeated in Table 1
below.
      </p>
      <p>
A key item is that these assessments are not just combinations
of information; they are also analytic judgments. For
example, a level 2 fusion process is more than a unified display
of information (e.g. a common operational picture); rather, it
requires explicit statements about how certain specific elements
of reality are structured, in order to address specific questions
that a user of that process wants answered. Level 2 fusion
essentially answers the question “what is going on?” Level 3
fusion addresses “what happens if …?”, where “if” is followed
by a possible action or activity (level 3 is often predictive).
Level 4 involves steering the fusion system, including adjusting
data collection based on an assessment of already-collected
data. There has been some discussion regarding the boundary
between level 1 and level 2. Das, for instance, considers
identification and object classification as beyond level 1,
suggesting that this type of fusion should be a level 1+ [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ].
Steinberg, on the other hand, considers this to be clearly level 1
[
        <xref ref-type="bibr" rid="ref8">9</xref>
        ]. Sowa’s ontological categories provide insight into this
question, and can be used to illuminate some factors on
uncertainty propagations considerations. In the present work,
these ontological categories were used as a basis for defining a
taxonomy of level 2 fusion.
      </p>
    </sec>
    <sec id="sec-2a">
      <title>TAXONOMY OF LEVEL 2 HLIF</title>
      <p>
        Sowa defined twelve ontological categories, and together
they comprise a very attractive framework for analyzing fusion
processes at level 2. He suggests that one way of categorizing
entities in the world is to consider them from three orthogonal
aspects [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ]. The first is whether they are physically existing
or abstract. Abstract entities are those that have information
content only, without a physical structure. Examples include
geometric forms or canonical structures (e.g. the idea of a
circle) and entities like computer program source code.
      </p>
      <p>
        The second aspect defining the ontological categorization is
whether the entity is a continuant (i.e., having time-stable
recognizable characteristics) or an occurrent (i.e., significantly
changing over time). This means that an entity can either be an
object (a continuant) or a process (an occurrent – also called an
event). The third and final aspect of his ontological
categorization is the degree of interrelatedness with other
objects and processes. At the independent level, an entity is
considered by itself, without reference to other entities. At the
relative level, an entity is considered in single relation to
another entity. Finally, the idea of mediating takes into account
two items: the number and complexity of the various
interrelationships among the entities, and the unifying idea – its
purpose or reason – that allows one to define a situation or a
structure that encompasses the relevant entities [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ].
      </p>
      <p>The combination of these three aspects results in the 12
ontological categories shown in Table 2. Table 3 provides a
more detailed definition of each ontological category and
provides some examples.</p>
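      <p>The cross-product structure of this categorization can be made concrete with a short sketch (illustrative only; the aspect labels follow the text above):</p>

```python
# Sowa's 12 ontological categories arise as the cross product of three
# orthogonal aspects; this sketch simply enumerates the cells by their
# aspect triple.
from itertools import product

aspects = (("physical", "abstract"),
           ("continuant", "occurrent"),
           ("independent", "relative", "mediating"))

cells = list(product(*aspects))
print(len(cells))  # 12 categories, as in Table 2
```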
      <p>A key point in looking at this ontological categorization is
that one must understand the context and viewpoint from which
a given entity is categorized, and that changes to either of these
two might result in different categorizations for the same entity.
To illustrate this point, an airplane can be considered as either
an independent object flying in the air, or a complex mediating
structure with thousands of component objects and processes
that work together for the purpose of achieving aerial flight.
The viewpoint one takes depends on the context one is
interested in. In the airplane example, it depends on whether
one is tracking a particular aircraft using a variety of sensors, or
attempting to determine the various capabilities of a new
aircraft type.</p>
      <p>Independent
Relative
Mediating</p>
      <p>It is tempting to suggest that Sowa’s three relationship
levels correspond to the JDL levels 1 / 2 / 3 (i.e., Independent,
Relative, and Mediating, respectively). However, this has at
least three major problems. First, Sowa’s relative level is
focused on a single relationship between two entities, while
JDL level 2 can (but does not have to) consider multiple
relationships in and between multiple entities. Second, JDL
level 2 situation assessment includes making assessments about
the purpose or reason for the situation. This reason or purpose
is the key characteristic that distinguishes one situation from
another. A raucous sports team victory celebration, a protest
and a riot share many entities and relationships, but
understanding the reason/purpose behind them can make a
significant difference to a chief of police. Third, there are level
1 inferences that depend on the existence of fixed relationships
between entities.</p>
      <p>
        To illustrate the latter point above, consider the case of an
intercepted radar signal that has been classified as having come
from a specific type of radar system. Now let us suppose that
the radar type is tightly associated with a larger system, such as
the AN/APG-63 radar on older versions of the US F-15 aircraft
[
        <xref ref-type="bibr" rid="ref11">12</xref>
        ]. If one has detected the APG-63 radar, one also has very
high confidence that one has detected an F-15 aircraft. This
F-15 object identification occurs because there is a fixed
relationship between the two objects (it’s not a 100%
relationship, as the APG-63 is also installed on fourteen United
States Customs and Border Protection aircraft [
        <xref ref-type="bibr" rid="ref12">13</xref>
]). This
situation is a clear example of a fixed relationship between
entities (i.e., the AN/APG-63 used in F-15 fighters) that supports
a level 1 object identification, which makes it problematic to
directly associate JDL level 1 with Sowa’s independent
level.
      </p>
      <p>Schema Circle, language
concepts for classes of
objects (e.g. cat,
airplane)
Script The time or time-like Process instructions,
sequence of an occurrent software source code,
radar track file
Juncture Time-stable relationship Joint between two
between two objects bones, connection
between parts of a car
Participation Time-varying relationship Artillery firing a shell,
between two objects, or a radio communication
process related to an object between two people</p>
      <p>
        Now consider the case where the radar is associated with a
Surface-to-Air Missile (SAM) system, such as the Tin Shield
acquisition radar and the SA-10 Grumble SAM system. The
SA-10 system consists of multiple vehicles, with the radar
vehicle physically separate from the others. It is possible
for the Tin Shield radar to be used
as a stand-alone search radar [
        <xref ref-type="bibr" rid="ref13">14</xref>
        ]. In this case, detection of the
Tin Shield radar signal may indicate the presence of the SA-10,
but it may not.
      </p>
      <p>A key differentiator between JDL levels 1 and 2 is the focus
on an object versus on multiple objects in relationship to each
other. Yet, as illustrated by the last two examples, a JDL level
1 assessment can use techniques that are grounded in Sowa’s
relative level. In general, determining an object’s level 1
attributes and states often depends on fusing different sensor
outputs of processes that an object has undergone – thus
making use of participation level information.</p>
      <p>Using Sowa’s categories, one can create the taxonomy of
level 2 situations shown in Figure 1. This taxonomy ranges
widely in complexity and analytic inferences required. There
are five cases presented in the Figure, each created by first
determining whether one is dealing with a known situation, or
whether the situation itself must be inferred. In general, the
least complex case is for known situations where one is
determining / refining the attribute of an entity. This case
straddles the level 1 / 2 line. It is object / process identification
where the relationship between elements within the object of
interest may vary. An example is the radar / vehicle case above.
The defined situation is that a Tin Shield radar has been
detected at a particular location. The question is whether an
SA-10 battery (a higher level object) is at that location, or
whether the radar is operating in a stand-alone mode (whether
operationally, for system testing, or for system maintenance).
The inferences generally are based on schema-based evidential
reasoning (e.g. “there is a 95% chance that this radar will be
associated with an SA-10 battery in its immediate vicinity”).
The second case is a step up in complexity, where the situation
is well defined but the objective is to identify a specific object
of interest within the situation. For example, one might have
very credible evidence that a terrorist group will attempt to
smuggle a radiological bomb into the United States via a
freighter. In this case, the situation itself is known (one knows
the purpose / intention), but the actors may be hidden.
Inferring which freighter (an object identification) is a likely
carrier of the bomb is the question of interest. Another
example would be to determine who committed a robbery of a
bank, when one has a video of the act itself (the situation is a
robbery). In this case, the evidence is extracted from a variety
of sources, which can be classified as being junctures,
participations, histories or descriptions.</p>
      <p>The inferential process generally becomes more complex
when the specific situation itself is not known, but must be
inferred. The taxonomy outlines three such cases, each with an
increasing level of complexity. The first is when the specific
situation is not known, but there is a set of well-defined
situation choices to select from. This case is a situation version
of a state transition. A classic example is the military
indications and warning question, which can be raised when an
increase in activity at military locations in a country is detected.
The question then becomes “what is the purpose of the
activity?” Four major choices exist: a major military exercise,
suppression of domestic unrest, a coup d’état, or preparing to
go to war. Each is a relatively well-defined situation with
known entities, attributes and relationships. The selection
among them becomes a pattern-matching exercise.</p>
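      <p>The selection among well-defined situation choices can be sketched as a simple template match (a hypothetical illustration; the indicator names and scores are invented, and a real system would also weigh evidence credibility and force):</p>

```python
# Hypothetical pattern-matching sketch for selecting among known
# situation templates, scoring observed indicators against each
# template's expected indicators.

templates = {
    "military exercise": {"unit movement", "comms increase", "range activity"},
    "suppress unrest": {"unit movement", "urban deployment"},
    "coup d'etat": {"capital deployment", "comms increase"},
    "prepare for war": {"unit movement", "comms increase",
                        "logistics surge", "border deployment"},
}

observed = {"unit movement", "comms increase", "logistics surge"}

# Score = fraction of each template's expected indicators observed
scores = {name: len(observed & inds) / len(inds)
          for name, inds in templates.items()}
best = max(scores, key=scores.get)
print(best)  # "prepare for war" matches 3 of its 4 indicators
```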
<p>The next level of complexity occurs when the situation is not
only unknown but must itself be developed. Unlike the case
above, the issue now is not choosing among a set of possible
situations but building the situation from the
data. This case can be divided into two subcases. In the first
subcase, one has a series of templates that can be used in
developing aspects of the situation. For example, in developing
an enemy order of battle for a standing nation-state’s military,
one has a basic understanding of the objects and relationships
that constitute a modern military force. A country may not have
all of the elements, and the organizational structure will vary.
Yet, it is very likely that the structure and deployment will
follow patterns similar to those used by other countries.</p>
      <p>The second subcase is the most complex situation. Here,
one must develop a situation where the basic purpose itself
must be determined. For example, consider the case when a
government agency is notified that something is significantly
amiss, with enough information to spark interest, but not
enough to understand what is happening. In that case, the
evidence must be assembled without a common template to
guide the fusion. Rather, the evidence must be fused using
fragmentary templates, that themselves must be integrated to
provide the overall situation. Integrating the data to “connect
the dots” that could have predicted the September 11, 2001
commercial airliner strikes on the World Trade Center and the
Pentagon falls into this category. Note that this case also
straddles the level 2 / level 3 fusion line, since determining the
purpose in this case has a predictive element with possible
courses of actions and outcomes.</p>
    </sec>
    <sec id="sec-2b">
      <title>UNCERTAINTY PROPAGATION IN HLIF</title>
      <p>
        In any fusion process, one follows a fundamental reasoning
process, which logically uses a series of reasoning steps, often
of an “if, then” form. Beginning with a set of events, we form
a chain of reasoning to come to one or more conclusions.
Figure 2a models a simple case, while Figure 2b gives an
example of that case. More complex structures can be easily
created [
        <xref ref-type="bibr" rid="ref14">15</xref>
        ].
      </p>
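      <p>A minimal sketch of how certainty might be propagated along such a chain, treating each “if, then” step as a conditional probability (the numbers are illustrative, not from the paper):</p>

```python
# Hypothetical sketch: propagating certainty through an "if, then"
# reasoning chain like Figure 2b, treating each step as a conditional
# probability and assuming the steps are independent.

def chain_certainty(p_event, step_conditionals):
    """Multiply the certainty of the starting event through each
    reasoning step's conditional probability."""
    p = p_event
    for p_step in step_conditionals:
        p *= p_step
    return p

# Event E: SAM radar active at location X (sensor certainty 0.9)
# Step 1: SAM systems are located with the radar (0.95)
# Step 2: if the radar is active, the SAM is preparing to engage (0.8)
p_conclusion = chain_certainty(0.9, [0.95, 0.8])
print(round(p_conclusion, 3))  # 0.684
```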
      <p>
        The ETURWG found that within this fundamental process
there were at least four areas for uncertainty considerations: the
uncertainty in the input data, the uncertainty associated with
representation within the fusion system, the uncertainty effects
of the reasoning process, and the resultant uncertainty in the
outputs of the process [
        <xref ref-type="bibr" rid="ref7">7, 8</xref>
        ]. The subsections below address
some of the ontological considerations associated with the first
three factors. Issues associated with output uncertainty are
treated in section V.
      </p>
      <sec id="sec-2-1">
        <title>A. Uncertainty in the Input Data</title>
        <p>
          All conclusions are ultimately grounded on evidence,
drawn from a variety of data sources. But often evidence is
“inconclusive, ambiguous, incomplete, unreliable, and
dissonant.” Any conclusion drawn from a body of evidence
is necessarily uncertain. Schum [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ] found that one must
establish the credentials of any evidence used in a reasoning
process. These credentials are its relevance to the question /
issue at hand, its credibility, and its weight or force [
          <xref ref-type="bibr" rid="ref15">16</xref>
          ]. This
suggests that one should elaborate on the fundamental
reasoning process from Figure 2 with the additional items
shown in Figure 3.
        </p>
        <p>[Figures 2 and 3 residue: (a) a reasoning chain from Event E through reasoning steps 1..N to a conclusion; (b) the example chain: SAM radar active at location X; SAM systems are located with the radar; if the SAM radar is active, the SAM is preparing to engage; therefore the SAM system will engage a target.]</p>
        <p>Data becomes evidence only when it is relevant. Relevance
assesses whether the evidence at hand is germane to the
question(s) being considered. Irrelevant information makes no
contribution to the conclusion drawn, and potentially confuses
the fusion process by introducing extra noise. Evidence can be
either positively (supportive) or negatively (disconfirmatory)
relevant to a particular hypothesis. Any analytic effort is
obliged to seek and evaluate all relevant data.</p>
        <p>
          Once data is shown to be relevant to a particular problem
(i.e., it becomes evidence), Schum points out that there is an
important but often overlooked distinction between an event
(an object, process, juncture or participation in Sowa’s
ontological categories) and the evidence about that event or
state. That is, Joe’s statement “I saw Bob hit Bill with a club”
does not mean that such event actually happened, and should
be seen only as evidence about it. Credibility establishes how
believable a piece of evidence is about the event it reports on.
Schum identified three elements of credibility [
          <xref ref-type="bibr" rid="ref16">17</xref>
          ]; the
ETURWG added self-report as a distinct element (see Table 4
for elements and definitions) [7]. Two examples: observational
sensitivity means the source has the ability to actually observe
what it reports (e.g. the observer actually has the visual acuity
needed to see what was going on, or an electronic intercept was of
such low quality that the operator guessed part of the
conversation); self-report means the source provides a measure of
its certainty in its report (e.g. a human source hedges her report
with “it’s possible that…” or a sensor reports that detection was
done at a signal-to-noise ratio of 4).
        </p>
        <p>
          The force (or weight) of the event establishes how
important the existence of that event is to the conclusion one is
trying to establish. By itself, the event “Bob hit Bill with a
club” would have a significant force in establishing a
conclusion that Bill was seriously injured. It would have less
force in establishing that Bob was committing a violent act and
needed to be stopped, and even less force in concluding
that Bob was angry at Bill. Figure 3 shows that credibility can
have an effect on the force of an event on the conclusion. For
example, if the credibility of Joe’s testimony about Bob hitting
Bill with a club is low, the certainty of a conclusion that Bob’s
hitting was the cause of Bill’s injuries would be less than if
Joe’s testimony had high credibility. Schum investigated a number
of different ways in which considerations about data credibility
could affect the overall conclusions. One of his most interesting
findings is that, under certain circumstances, having credible
data on the credibility of a data source can have a more
significant force on the conclusion than the force of the event
reported in the data [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ].
        </p>
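        <p>One simple way to sketch this discounting effect (not Schum’s actual formalism; the names and numbers are hypothetical):</p>

```python
# Illustrative sketch: discounting the force of evidence by the
# credibility of its source, in the spirit of the Joe / Bob / Bill
# example. All values are invented for illustration.

def discounted_force(p_conclusion_given_event, credibility, prior):
    """Blend the conclusion's probability given the reported event
    with the prior belief, weighted by how credible the report is."""
    return credibility * p_conclusion_given_event + (1 - credibility) * prior

prior = 0.1          # belief that Bob caused Bill's injuries, absent testimony
p_given_event = 0.9  # belief if "Bob hit Bill with a club" is true

print(round(discounted_force(p_given_event, credibility=0.9, prior=prior), 2))
print(round(discounted_force(p_given_event, credibility=0.3, prior=prior), 2))
```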
      </sec>
      <sec id="sec-2-2">
        <title>B. Uncertainty in the Representation</title>
        <p>
          Uncertainty varies in its forms and manifestations. Therefore,
the uncertainty representation scheme used has an effect on
what can or cannot be expressed. To see this, one first needs
an understanding of the different types of uncertainty.
The W3C Incubator Group exploring uncertainty reasoning
issues for the World Wide Web developed an initial ontology
of uncertainty concepts, shown in Figure 4 [
          <xref ref-type="bibr" rid="ref17">18</xref>
          ].
   
        </p>
        <p>A Sentence is a logical expression in some language that
evaluates to a truth-value (formula, axiom, assertion). For our
purposes, information will be presented in the form of
sentences. The World is the context / situation about which the
Sentence is said. The Agent represents the entity making the
Sentence (human, computer etc.). Uncertainty is associated
with each sentence, and has four categories. Three of those are
described in Table 5, along with their significance for
uncertainty propagation in a HLIF process.
One of these, inconsistency, means that no world can satisfy the
statement; its significance is that the data is contradictory, and
the source of the contradiction must be resolved (inconsistency can
occur when deception is used).</p>
        <p>The last category in the ontology is Uncertainty Model,
capturing the various approaches that can be used to model
uncertainty in a reasoning process. These include (but are not
limited to): Bayesian probability theory, Dempster-Shafer
evidence theory, possibility theory, imprecise probability
approaches, random set theory, fuzzy set theory and rough sets,
and interval theory.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Uncertainty Factors</title>
      <p>
        A critical item in uncertainty propagation is the proper fit
between the types of uncertainty in the input data and in the
model(s) used in the fusion reasoning process. Failure to
account for all of the uncertainty types in the input data can
result in an erroneous process output. A classic survey of
uncertainty models, with a discussion on applicable uncertainty
types, is given in [
        <xref ref-type="bibr" rid="ref18">19</xref>
        ], with a recent review of the
state of the art in [
        <xref ref-type="bibr" rid="ref19">20</xref>
        ].
      </p>
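      <p>As a concrete instance of one listed model, Dempster-Shafer combination can be sketched for the earlier Tin Shield / SA-10 question (the mass values are illustrative, not from the paper):</p>

```python
# A minimal Dempster-Shafer combination sketch: fusing two independent
# mass assignments over the frame {SA-10, standalone}. Masses are
# hypothetical.
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by
    frozensets of hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    # Normalize by the non-conflicting mass
    return {k: v / (1 - conflict) for k, v in combined.items()}

SA10 = frozenset({"SA-10"})
ALONE = frozenset({"standalone"})
EITHER = SA10 | ALONE

m_radar = {SA10: 0.6, EITHER: 0.4}                 # radar intercept evidence
m_imagery = {SA10: 0.5, ALONE: 0.2, EITHER: 0.3}   # imagery evidence

fused = combine(m_radar, m_imagery)
print(round(fused[SA10], 3))  # belief committed to SA-10 rises after fusion
```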
      <sec id="sec-3-1">
        <title>C. Uncertainty in the HLIF Fusion Process</title>
        <p>To explore the ontological considerations of the uncertainty
propagation in a HLIF fusion process, we need to have a basic
fusion process model. We will concentrate on the level 2 fusion
process only, and leave out significant detail on the processes
at the other levels. Figure 5 shows this model. The first thing to
observe is that the raw data can come in at any level, as
evidenced by the incoming arrows at the right side of the
figure. The model does not require that all data be signal or
feature (Level 0) data, which is then aggregated into
higher-level conclusions. For instance, object identification data (level
1) could come from an on-scene observer or from an image
analyst reporting on an image. Communications intercepts or
human reporting could provide evidence on relationships (level
2) or future intentions (level 3). Note that if a level 3 fusion
process is active, its outputs could affect the level 2 process in
two places. It can either be a controlling variable in the fusion
process itself, or it can affect the interpretation and extraction
of evidence. However, a level 3 process will have an effect
only if it has separate evidence that is not being used in the
level 2 fusion process (otherwise one has circular reporting).</p>
        <p>
          There are four basic processes in this model. The first is the
fusion process itself, which is usually some form of a
model-based process. These models most often take the form of
Bayesian networks [
          <xref ref-type="bibr" rid="ref20 ref21 ref9">10, 21, 22</xref>
          ], although alternative
approaches have been proposed using graphical belief models
[
          <xref ref-type="bibr" rid="ref22">23</xref>
          ] and general purpose graphical modeling using a variety of
uncertainty techniques [
          <xref ref-type="bibr" rid="ref13">14</xref>
          ].
        </p>
        <p>Another important aspect of this model that must be
emphasized is that not all of the evidence that goes into the
model-based process is (or is assumed to be) in an immediately
usable form. Some data must have the appropriate evidence
extracted from it. This is where the uncertainty considerations
associated with representation within the fusion system come
into play. For example, the raw level 2 data may be a series of
people association data, which must be combined into a social
network analysis to reveal the full extent of the relationships.</p>
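        <p>A minimal sketch of that extraction step, assuming hypothetical association reports:</p>

```python
# Hedged sketch: combining pairwise person-association reports into a
# simple social network, then extracting each person's degree. Names
# are hypothetical; real level 2 extraction would also weigh report
# credibility.
from collections import defaultdict

reports = [("Ali", "Omar"), ("Omar", "Hassan"), ("Ali", "Hassan"),
           ("Hassan", "Yusuf")]

network = defaultdict(set)
for a, b in reports:
    network[a].add(b)
    network[b].add(a)

# Degree hints at who is central to the web of relationships
degrees = {person: len(links) for person, links in network.items()}
print(degrees)  # Hassan has degree 3, the highest in this sketch
```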
        <p>[Figure 5: level 2 fusion process model, with level 0–3 data inputs, data alignment, level 2 evidence extraction, a data store, per-object level 1 fusion, the level 2 fusion process and its conclusions, and a level 3 fusion process feeding back into it.]</p>
        <p>Another example may be that one is interested in whether
two ships met and transferred cargo in the open ocean.
Suppose that you have a track file on each ship which has long
revisit rates between collections. This does not provide an
obvious indication that the ships met and stopped for a while.
But the track files show that both ships were on tracks that did
put them at a common location at a given period, and that the
average speed dropped significantly during the time a meeting
could have occurred (implying that the ships may have stopped
for a while). Given this data, one could conclude with some
level of certainty that they did meet and stopped to transfer
something. This level of certainty is driven by at least two
factors: the quality of the track file data (establishing how
certain one is in concluding that the tracks allowed them to
meet), and how likely it is that two ships showing these track
characteristics actually would have met and stopped.</p>
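        <p>The track-file reasoning can be sketched as follows (the coordinates, times, and thresholds are hypothetical):</p>

```python
# Illustrative sketch of the ship-meeting inference from sparse track
# points (t, x, y): test whether two tracks were near a common location
# at nearly the same time, and inspect average speed for a slowdown.
from math import hypot

def avg_speed(track):
    """Mean speed over successive legs of a (t, x, y) track."""
    return sum(hypot(x2 - x1, y2 - y1) / (t2 - t1)
               for (t1, x1, y1), (t2, x2, y2) in zip(track, track[1:])
               ) / (len(track) - 1)

def may_have_met(track_a, track_b, max_dist=5.0, max_dt=1.0):
    """True if the tracks have near-simultaneous points close together."""
    return any(abs(ta - tb) <= max_dt and hypot(xa - xb, ya - yb) <= max_dist
               for ta, xa, ya in track_a
               for tb, xb, yb in track_b)

ship_a = [(0, 0, 0), (10, 100, 0), (20, 110, 2)]    # speed drops after t=10
ship_b = [(0, 200, 0), (10, 102, 1), (20, 90, 10)]

print(may_have_met(ship_a, ship_b))  # True: both near (100, 0) at t=10
```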
        <p>A significant part of the evidence extraction process could
be comparison to historical or reference data. For example, a
vehicle may be moving outside of a normal shipping lane /
airway or off-road. This requires a reference to a map base.
For this reason, the process model includes a data store, for
both reference information and for previous data.</p>
        <p>The last part of the model is a data alignment process.
Data may come in with different reference bases, and need to
be aligned to a common baseline in order to be used in the
extraction and fusion processes.</p>
        <p>Finally, note that the level 2 process includes the possibility
of a direct use of level 0 data. An area of active research is the
multi-source integration of level 0 data that is not of sufficient
quality, or that does not have enough quantity to allow a high
quality single-source conclusion.</p>
        <p>
          Several authors have developed mathematical constructs for
use in assessing the uncertainty of a situation assessment [
          <xref ref-type="bibr" rid="ref2 ref24">2,
25</xref>
          ]. Our model is a version of the one put forth by Karlsson
[
          <xref ref-type="bibr" rid="ref25">26</xref>
          ], modified using the terminology put forth by Franconi
[
          <xref ref-type="bibr" rid="ref26">27</xref>
          ]. Karlsson’s version focuses only on relationships and
does not explicitly include predicates and attributes. While one
can model predicates and attributes as relationships, it is
cleaner to separate the entity space from the attribute space. In
addition, the construct formed in this paper acknowledges level
2 HLIF as explicitly including entity attributes as well as
relationships between entities. Treating attributes as separate
from entity relationships, rather than folding attribute states
into the relationships, makes this clearer. Per [
          <xref ref-type="bibr" rid="ref26">27</xref>
          ], the
language consists of:
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>The language elements</title>
      <p>En, the 1-ary predicates; Ak, the attributes (stated as 2-ary predicates); and Rp, the n-ary predicates for all relationships.</p>
      <p>There is an interpretation function I = 〈D, ·I〉, where the domain
D is a non-empty set D = Ω ∪ B; Ω is the set of all entities, B is
the set of all attribute values, and Ω ∩ B = ∅. Then</p>
      <p>EiI ⊆ Ω
AiI ⊆ Ω × B
RiI ⊆ Ω × Ω × … × Ω = Ωn
and the xi are the specific instances, with xi ∈ Ω.</p>
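      <p>As a concrete (and entirely invented) illustration, the interpretation can be mirrored with ordinary sets, keeping Ω and B disjoint:</p>

```python
# Toy interpretation I = <D, .I> with D = Omega ∪ B and Omega ∩ B = ∅.
# Every entity, value, and relation name here is an invented placeholder.
Omega = {"tin_shield", "sa10", "ship_a", "ship_b"}   # Omega: set of all entities
B = {"operational", "stationary"}                    # B: set of all attribute values
assert Omega.isdisjoint(B)                           # Omega ∩ B = ∅

E_radar = {"tin_shield"}                             # 1-ary predicate: E^I ⊆ Omega
A_status = {("tin_shield", "operational")}           # attribute: A^I ⊆ Omega × B
R_juncture = {("tin_shield", "sa10")}                # relation: R^I ⊆ Omega × Omega

print(("tin_shield", "sa10") in R_juncture)          # membership test → True
```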
      <p>We can make at least three uncertainty assessments. For
any specific entity tuple (x1, …, xn), we have a level of
uncertainty as to whether that tuple is a member of a specific
relationship. For a generic uncertainty measure uT, the basic
equation for whether a tuple is correctly associated with a
defined relationship is</p>
      <p>uTj((x1, …, xn)j ∈ Rj | EB, S, I) (1)
where EB is the body of evidence used in making the
assignment, and S, I are any already known situation or impact
states. A similar equation holds for attribute uncertainty.</p>
      <p>We can also have uncertainty as to whether a relationship
that we see in the data is the relationship of interest. Given a
set of k possible relationships and a body of evidence EB for a
particular relationship Rcurrent, we can assess the following
uncertainty:
uRk(Rcurrent = Rk | EB, S, I) (2)</p>
      <p>Again, a similar uncertainty equation holds for attribute
uncertainty. Situation assessment depends on the relationships
in the situation. A situation then can be defined as
S ≝ (R1, …, Rk, A1, …, An) (3)</p>
      <p>Finally, we have an uncertainty measure uS. Given a set of
m possible situations and a body of evidence EB for a particular
situation Scurrent, we can assess the following uncertainty:
uS(Scurrent = Sm | EB, I) (4)</p>
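      <p>A minimal sketch of how measures (1), (2), and (4) might be realized is given below; the evidence handling is reduced to assumed numeric scores, and the relationship names are invented:</p>

```python
# Sketch of uncertainty measures (1), (2), and (4) as score tables.
# The evidence body EB is reduced to assumed numeric scores for illustration.

def u_tuple(tup, relation_scores):
    """(1): uncertainty that a specific tuple belongs to relation R, given evidence."""
    return relation_scores.get(tup, 0.0)

def u_relation(evidence_scores):
    """(2): normalize per-relation evidence scores into P(R_current = R_k | EB)."""
    total = sum(evidence_scores.values())
    return {k: v / total for k, v in evidence_scores.items()}

def u_situation(situation_scores):
    """(4): the same normalization, applied over candidate situations S_m."""
    return u_relation(situation_scores)

beliefs = u_relation({"escort": 3.0, "rendezvous": 1.0})  # invented relation names
print(beliefs["escort"])                                   # → 0.75
```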
      <p>In addition to uncertainties in the evidence and in the
reasoning process, equation (4) also allows us to account for
uncertainties in the situation definition. Equation (3) implies that
every situation can be precisely defined as a set of specific
relationships and attributes. But what if a relationship or
attribute is missing in a particular situation instance? For
example, a canonical birthday celebration in the United States
includes a cake with a number of lit candles on it. If there are
no candles on the cake, does this mean it is not a birthday
celebration?</p>
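      <p>One hedged way to handle such a missing element is to score a candidate situation by the fraction of its defining relationships and attributes actually observed, rather than requiring all of them. The sketch below uses the birthday example with invented element names and equal weights:</p>

```python
# Sketch: partial matching of a situation definition per equation (3),
# so one missing element (no candles) lowers rather than vetoes the
# assessment. Element names and equal weighting are assumptions.

def situation_match(defining, observed):
    """Fraction of a situation's defining elements present in the evidence."""
    present = sum(1 for r in defining if r in observed)
    return present / len(defining)

birthday = {"cake", "lit_candles", "singing", "gifts"}  # canonical definition
scene = {"cake", "singing", "gifts"}                    # no candles on the cake
print(situation_match(birthday, scene))                 # → 0.75
```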
      <sec id="sec-6-1">
        <title>B. Application to Situation Assessment Taxonomy</title>
        <p>We can use this model to better understand the varying
complexities of the different situation assessment cases given
in section 3. For the simplest case, entity attribute refinement,
we have a very simple situation (“emitter operational in the
environment”). From the existence of one object (the Tin
Shield radar), we are inferring the existence of a second object
(the SA-10 SAM system). This is a binary relation, based on a
Sowa Juncture (x1, x2). With this binary relation, we are
operating with a single instance of equation (1): the only
uncertainty measure is whether “Tin Shield” and “SA-10” are
in juncture. For the second case, entity selection, we again have
a defined situation, but are now seeking a specific object from
among multiple candidate objects. We are operating at the level
of equation (2): we are seeking the specific relation asserting
that ship i is the ship of interest. Based on the evidence, we
create multiple tuples for the different relationships that could
lead us to the ship (using equation (1)) and then combine the
results to arrive at equation (2).</p>
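        <p>The combination step for case 2 can be sketched as follows; the per-ship tuple scores standing in for equation (1) are invented, and simple averaging and normalization stand in for a real fusion rule:</p>

```python
# Sketch of case 2 (entity selection): per-ship tuple scores from
# equation (1) are combined and normalized into equation (2)'s
# "ship i is the ship of interest". All scores are invented.

tuple_scores = {                 # u_T scores for tuples linking each ship to evidence
    "ship_1": [0.9, 0.6],
    "ship_2": [0.4, 0.3],
}
combined = {s: sum(v) / len(v) for s, v in tuple_scores.items()}  # crude fusion rule
total = sum(combined.values())
selection = {s: v / total for s, v in combined.items()}  # eq (2) over candidates
print(max(selection, key=selection.get))                 # → ship_1
```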
        <p>For the third case, structure / situation selection, we invoke
equation (4) as the basic equation: we are choosing among
multiple candidates for what the situation is. We use equation
(1) to determine whether various relationships exist and, based
on those findings, determine which situation model is the
correct one for this body of evidence. For the fourth case,
structure / situation refinement, we again use equations (1) and
(4), but we also use equation (2) to determine what the exact set
of relationships is. Case 4 differs from case 3 in that we are
trying to determine which relationships are appropriate for this
situation (or structure).</p>
        <p>For the fifth case, structure / situation creation, we have all
of the uncertainties addressed above, plus an uncertainty that is
not immediately obvious in the generic equations. Consider
equation (4) again. One of its stated requirements is that we are
selecting among a set of defined situations; this is essentially a
closed world assumption. In case 5, however, we are building
the situation rather than determining which situation among a
set of candidates is the applicable one. We still have a number
of models to choose from, but they are more fragmentary than
in the previous cases. The previous cases represent a “pieces of
the puzzle” approach, in which one assembles the puzzle
guided by one or more available pictures. Case 5 is the case in
which one assembles the puzzle without any picture to guide
one, relying instead on the basic puzzle rules about matching
shapes and colors. So, in case 5, we are also determining what
the applicable Sk are.</p>
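        <p>The closed-world versus open-world distinction in equation (4) can be sketched by reserving belief for a residual “none of the known situations” hypothesis. The scores and the residual weighting below are assumptions for illustration:</p>

```python
# Sketch: closed- vs open-world assessment over equation (4)'s candidates.
# An open-world assessor reserves mass for "no predefined situation fits".
# Situation names, scores, and the residual weight are invented.

def assess(scores, open_world=False, unknown_weight=0.5):
    scores = dict(scores)
    if open_world:
        # residual hypothesis: the evidence may match no known situation model
        scores["unknown_situation"] = unknown_weight * max(scores.values())
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

closed = assess({"convoy": 2.0, "blockade": 1.0})
opened = assess({"convoy": 2.0, "blockade": 1.0}, open_world=True)
print(round(opened["unknown_situation"], 2))   # → 0.25
```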
      </sec>
    </sec>
    <sec id="sec-7">
      <title>DISCUSSION</title>
      <p>Up to this point we have identified a number of uncertainty
propagation considerations that arise when analyzing a level 2
HLIF process. Most of these are not necessarily obvious at first
glance, which underscores the importance of a framework that
supports the analytical process. The framework proposed in
this paper is meant to support the analysis of processes
occurring at JDL fusion level 2, and an important aspect of it is
the ability to correlate such processes with the uncertainty
considerations raised so far. Figure 6 summarizes these
considerations as they relate to the heart of the basic process
model shown in Figure 5.</p>
      <p>
        The taxonomy of level 2 HLIF types discussed in section 2
defines the complexity of the uncertainty considerations that
must be accounted for. Five different types are identified,
ranging from simple entity attribute refinement using situation
status data to the development of a complete situation
assessment assembled from applicable situational fragment
data. The uncertainty in the input data / evidence must be
assessed for relevance, credibility, and force / weight, per the
ontology of evidence presented in Laskey et al. [
        <xref ref-type="bibr" rid="ref16">17</xref>
        ]. The
representation uncertainties that drive the modeling
methodologies can be classified per the uncertainty ontology
developed by the W3C Incubator Group for Uncertainty
Reasoning [
        <xref ref-type="bibr" rid="ref17">18</xref>
        ]. A variety of different models can be used to
properly capture the aspects of uncertainty in the data [
        <xref ref-type="bibr" rid="ref18 ref19">19, 20</xref>
        ].
Finally, the output uncertainty strongly depends on the a priori
identification of possible situation choices, or upon having a
fusion process that allows for an effective open world
assumption. These uncertainty considerations are the beginning
of understanding how to evaluate the effectiveness of various
uncertainty management methods in high-level fusion.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Hall</surname>
          </string-name>
          , J. Llinas,
          <article-title>“Multi-Sensor Data Fusion” in Handbook of Multisensor Data Fusion: Theory and Practice (2nd ed)</article-title>
          , CRC Press, pp
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          ,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Lambert</surname>
          </string-name>
          , “
          <article-title>A Blueprint for Higher Level Fusion Systems”</article-title>
          , in Information Fusion, pp
          <fpage>6</fpage>
          -
          <lpage>24</lpage>
          , Elsevier, Vol
          <volume>10</volume>
          ,
          <year>2009</year>
          [3]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Kokar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Matheus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Baclawski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Letkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hinman</surname>
          </string-name>
          , J. Salerno, “
          <article-title>Use Cases for Ontologies in Information Fusion</article-title>
          ”,
          <source>Proceedings of the 7th International Conference on Information Fusion</source>
          (
          <year>2004</year>
          ), retrieved from http://vistology.com/papers/Fusion04-UseCases.pdf on 1 Jun 2012.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E. G</given-names>
            <surname>Little</surname>
          </string-name>
          , G. L Rogova,
          <article-title>“Designing Ontologies for Higher Level Fusion”</article-title>
          ,
          <source>Information Fusion</source>
          , pp
          <fpage>70</fpage>
          -
          <lpage>82</lpage>
          , Elsevier, Vol
          <volume>10</volume>
          ,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. C. G.</given-names>
            <surname>Costa</surname>
          </string-name>
          (
          <year>2005</year>
          )
          <article-title>Bayesian Semantics for the Semantic Web</article-title>
          .
          <source>Doctoral Thesis</source>
          , School of Information Technology and Engineering, George Mason University, Fairfax, VA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. N</given-names>
            <surname>Carvalho</surname>
          </string-name>
          (
          <year>2011</year>
          )
          <article-title>Probabilistic Ontology: Representation and Modeling Methodology</article-title>
          .
          <source>Doctoral Thesis</source>
          , School of Information Technology and Engineering, George Mason University, Fairfax, VA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [7]
          <article-title>Evaluation of Technologies for Uncertainty Reasoning Working Group (ETURWG) website</article-title>
          , http://eturwg.c4i.gmu.edu/?q=aboutUs , retrieved May 19,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P. C. G.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Blasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jousselme</surname>
          </string-name>
          , “
          <article-title>Towards Unbiased Evaluation of Uncertainty Reasoning: The URREF Ontology”</article-title>
          ,
          <source>Proceedings of the 15th International Conference on Information Fusion</source>
          (To Be Published)
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Steinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Bowman</surname>
          </string-name>
          , “
          <article-title>Revisions to the JDL Data Fusion Model,” Handbook of Multisensor Data Fusion: Theory and Practice (2nd ed)</article-title>
          , CRC Press, pp
          <fpage>45</fpage>
          -
          <lpage>68</lpage>
          ,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <source>High-Level Data Fusion</source>
          , Boston MA (USA): Artech House
          ,
          <year>2008</year>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sowa</surname>
          </string-name>
          ,
          <source>Knowledge Representation: Logical, Philosophical and Computational Foundations</source>
          , Pacific Grove CA (USA): Brooks/Cole,
          <year>2000</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[12] http://www.raytheon.com/capabilities/products/apg63_v3/</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[13] http://www.p3orion.nl/variants.html</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <article-title>Air Power Australia website</article-title>
          , http://www.ausairpower.net/APAAcquisition-GCI.html#mozTocId55304, as retrieved on June 2, 2012
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Schum</surname>
          </string-name>
          ,
          <source>The Evidential Foundations of Probabilistic Reasoning</source>
          , New York: John Wiley and Sons, Inc.,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schum</surname>
          </string-name>
          , “
          <source>Thoughts About a Science of Evidence</source>
          ”, University College London Studies of Evidence Science, retrieved from 128.40.111.250/evidence/content/Science.doc on June 2, 2012
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Schum</surname>
          </string-name>
          , P. C. G. Costa, T. Janssen, “
          <article-title>Ontology of Evidence</article-title>
          ”,
          <source>Proceedings of the Third International Ontology for the Intelligence Community Conference (OIC 2008)</source>
          , December 3-4,
          <year>2008</year>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C. G.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Kokar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          ,
          <source>Uncertainty Reasoning for the World Wide Web, W3C Incubator Group Report 31 March 2008</source>
          . Retrieved from http://www.w3.org/2005/Incubator/urw3/XGR-urw3-20080331/
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>P.</given-names>
            <surname>Walley</surname>
          </string-name>
          , “
          <article-title>Measures of uncertainty in expert systems”</article-title>
          ,
          <source>Artificial Intelligence</source>
          ,
          <volume>83</volume>
          (
          <issue>1</issue>
          ), May
          <year>1996</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>58</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>B.</given-names>
            <surname>Khaleghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khamis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. O.</given-names>
            <surname>Karray</surname>
          </string-name>
          , “
          <article-title>Multi-sensor Data Fusion: A Review of the State-of-the-Art</article-title>
          ”,
          <source>Information Fusion</source>
          (
          <year>2011</year>
          ), doi: 10.1016/j.inffus.2011.08.001
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C. G.</given-names>
            <surname>Costa</surname>
          </string-name>
          , T. Janssen, “
          <article-title>Probabilistic Ontologies for Multi-INT Fusion”</article-title>
          in
          <source>Proceedings of the 2010 conference on Ontologies and Semantic Technologies for Intelligence</source>
          ,
          <year>2010</year>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Steinberg</surname>
          </string-name>
          , “
          <article-title>Foundations of Situation and Threat Assessment” in Handbook of Multisensor Data Fusion: Theory and Practice (2nd ed)</article-title>
          , CRC Press, pp
          <fpage>437</fpage>
          -
          <lpage>502</lpage>
          ,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Almond</surname>
          </string-name>
          ,
          <source>Graphical Belief Modeling</source>
          , New York NY (USA): Chapman and Hall, 1995
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Shenoy</surname>
          </string-name>
          , “
          <article-title>Valuation-Based Systems for Bayesian Decision Analysis,”</article-title>
          <source>Operations Research</source>
          , pp
          <fpage>463</fpage>
          -
          <lpage>484</lpage>
          , Vol
          <volume>40</volume>
          , No 3, May-June
          <year>1992</year>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>P.</given-names>
            <surname>Svensson</surname>
          </string-name>
          , “
          <article-title>On Reliability and Trustworthiness of High-Level Fusion Decision Support Systems: Basic Concepts and Possible Formal Methodologies”</article-title>
          ,
          <source>9th International Conference on Information Fusion</source>
          , Florence (Italy), 10-13
          <year>July 2006</year>
          , retrieved online from http://www.isif.org/fusion/proceedings/fusion06CD/Papers/51.pdf on May 6, 2012
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Karlsson</surname>
          </string-name>
          ,
          <article-title>Dependable and Generic High-Level Algorithms for Information Fusion - Methods and Algorithms for Uncertainty Management</article-title>
          ,
          <source>Technical Report HS-IKI-TR-07-003</source>
          , University of Skovde, retrieved 15 Sep 2011 from his.divaportal.org/smash/get/diva2:2404/FULLTEXT01.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E.</given-names>
            <surname>Franconi</surname>
          </string-name>
          , Description Logic Tutorial Course, downloaded from http://www.inf.unibz.it/~franconi/dl/course/ on 1 May 2012
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>