<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Within the Threat Intelligence Domain</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ritten Roothaert</string-name>
          <email>h.m.roothaert@vu.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Istanbul, Turkiye</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Vrije Universiteit Amsterdam</institution>
          ,
          <addr-line>De Boelelaan 1105, 1081 HV, Amsterdam</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>National intelligence agencies face the challenge of data overload as they analyse vast amounts of information to address national security threats. To help analysts manage this complexity and ensure decision-making compliance, the Threat Intelligence Decision Ontology (TIDO) was developed. However, a major limitation of TIDO is the absence of a formal representation for the inherent uncertainties in threat intelligence (TI) data, which stems from imperfect sources, biases, and incomplete information. This ongoing PhD research addresses this critical gap by investigating how to integrate a suitable uncertainty representation into the TIDO framework.</p>
      </abstract>
      <kwd-group>
        <kwd>uncertainty representation and reasoning</kwd>
        <kwd>threat intelligence</kwd>
        <kwd>interval probabilities</kwd>
        <kwd>belief function theory</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>Motivation.</title>
        <p>
          One of the main tasks of national intelligence agencies is to investigate and act against
organizations or persons that could pose a threat to the national security. During such investigations,
analysts must interpret and analyse incoming data and act accordingly. However, similar to other
data-driven domains, technological advances have created new sources of information that can be
used during investigations, increasing the amount of information available for decision making, and
in turn increasing the risk of data-overload [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. This data-overload not only complicates the ability
of TI analysts to assess the situation and make their decisions, but also the analysis of those decisions
and whether or not those decisions are compliant with the applicable legislation and internal policies.
To mitigate this challenge of data-overload, the Threat Intelligence Decision Ontology (TIDO) was
developed. It is designed to support the analyst during the situation assessment and action selection
processes of TI analysis while simultaneously capturing the decision trace that can be used for
further post-analysis and compliance checking.
        </p>
        <p>
          A major limitation of the TIDO ontology in its current form is that there is no specific representation
for the uncertainties involved in the decision-making process. AI solutions and human experts will
always have biases or imperfect models, sources might be poor or misleading, and data-sharing
might be limited for operational, strategic or legal reasons. Consequently, at each step of the Threat
Intelligence Hybrid Workflow (TIHW), the analyst must revise this limited and uncertain information
and recommend actions [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. To do so, a suitable representation of the uncertainty associated with the
information and considerations used within the TI decision process is needed.
        </p>
        <p>https://ritten11.github.io/ (R. Roothaert)</p>
        <p>
          Uncertainty representation is a broad and complicated field, ranging from quantitative representations,
often expressed using probabilities, to qualitative representations, often expressed in natural language,
and ordinal representations, which are used to express that one statement is more likely to be true than
another. For expressing the uncertainties in the TIDO ontology, we focus on the quantitative uncertainty
representations as we aim to propagate uncertainties throughout the TIHW using a formalized and
sound theoretical background, capable of expressing how much more likely one statement is over
another. Here, a purely probabilistic perspective would work well when high-quality and complete
information is available. However, this assumption generally doesn’t hold in the domain of TI [
          <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
          ].
This concern has sparked several investigations into alternative methods to represent uncertainty,
which can be categorized into various groups. According to [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], there are five main groups: (1) general
probability theory, (2) interval probabilities, (3) probability bound analysis, (4) belief function theory, and
(5) possibility theory. Although this is not an exhaustive list, it provides a manageable number of widely
used approaches with which to start our study. Each of these groups comes with different prerequisites,
assumptions, and uncertainty interpretations, and thus the decision to opt for one representation over
another requires careful consideration. To keep the number of comparisons manageable, we decided
to focus our attention on general probability theory, interval probabilities, and belief function theory.
Probability bound analysis was excluded as it can be considered a combination of general probability
theory and interval probabilities, and possibility theory was excluded as it is considered to be subsumed
by belief function theory [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>Ethical considerations The author is aware of the sensitivity of the subject and the potential
implications with respect to the privacy of citizens. The motivation, though, is not to collect new data
but to provide a vocabulary to enrich and structure data already obtained by the intelligence services.
This enables the decision makers to make their assumptions and considerations explicit, increasing
their accountability and improving the testability of their decisions with respect to compliance with
legislation and internal policies. The proposed research does not provide intelligence agencies with
additional capabilities, nor justifications, to collect more data, and therefore, does not impact the privacy
of citizens.</p>
        <p>Approach and goals To determine which uncertainty representation is most suitable for the TI
domain, we split the problem into four research questions: (RQ1) What are the characteristics of the
data used for decision making within the domain of TI? (RQ2) What are the benefits and limitations of
the selected uncertainty representations? (RQ3) How can the data characteristics identified in RQ1 be
captured by the representations analysed in RQ2? And finally, (RQ4) what tooling is available for the
analysed uncertainty representations? RQ1 is assessed through a literature analysis, in combination
with findings derived from a focus group with domain experts. RQ2 will be assessed solely through
a literature analysis, and RQ3 will be addressed by means of a comparison between the answers of
RQ1 and RQ2. The final question, RQ4, is briefly touched upon in Section 3.4, but we hope to obtain
additional suggestions during the discussions at the RuleML+RR doctoral consortium.</p>
        <p>Organisation Section 2 introduces the TIDO ontology, and explains the core functionalities of the
ontology using an example of a common decision made within the TI domain. Section 3 provides an
analysis of the characteristics of the data used within the TI domain, along with an indication of
how these characteristics manifest themselves within the TIDO ontology, or how a future iteration of
TIDO could capture these characteristics. Section 4 provides an overview of the current progress and
individual contributions of the student attending this doctoral consortium, as well as a brief description
of the potential impact of the overall project.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. The Threat Intelligence Decision Ontology</title>
      <p>The Threat Intelligence Decision Ontology (TIDO) was developed to provide a vocabulary that can
be used to describe the decision processes within the TI domain, together with the information that was
used to come to those decisions. This ontology is an ongoing work, and at the moment of submission
of this doctoral consortium paper, the paper accompanying the TIDO ontology is under review at the
Thirteenth International Conference on Knowledge Capture (K-CAP 2025). Therefore, this introduction
for the TIDO ontology will not focus on the development of the ontology, but instead focus on the
main ideas and provide an intuitive explanation of how the ontology can be used to describe a decision
process in the TI domain. In Section 4, this example will also be used to describe our current ideas on
how to extend the TIDO ontology to allow for the expression of uncertainty.</p>
      <p>
        Figure 1 depicts the TIDO ontology. This ontology is an extension of the PROV ontology [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and
directly inherits the relationship between entities, in the TI context interpreted as pieces of information,
activities and agents. The TIDO ontology adds four additional modules to the PROV ontology:
TIDO-Process: The process module provides a vocabulary to describe different steps in the procedural
element of decision making. These steps are derived from Bales’ phases in group decision making
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and while decision processes generally follow the pattern of an :Orientation-step, followed by
an :Evaluation-step and finished with a :Resolution-step, real decision processes may deviate
from this pattern and therefore the TIDO ontology makes no assumptions on the ordering of
these steps.
      </p>
      <p>[Figure 1: The TIDO ontology as an extension of PROV. Shown are the classes :Case, :RQ, :Hypothesis,
:Evidence, :Option, :Consideration and :PieceOfInformation, the steps :Orientation, :Evaluation and
:Resolution, and relations such as :supports, :disputes, :providesInsightsInto, :answers, :questions,
:informs, :investigates and :hasConsideration, together with prov:Entity, prov:Activity, prov:Collection
and urref:Information. Namespaces: base https://w3id.org/tido#, rdfs: http://www.w3.org/2000/01/rdf-schema#,
prov: http://www.w3.org/ns/prov#, urref: http://eturwg.c4i.gmu.edu/files/ontologies/URREF_v5_dev.owl#]</p>
      <p>
        TIDO-Sense: The sense-making module provides a vocabulary to describe the relationship between
information, evidence and hypotheses. Here, the class :PieceOfInformation adopts the very broad
interpretation of information in the URREF ontology [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], meaning it can be anything ranging from a
sensory measurement, a fact, or common sense to an uncertainty statement. However, for practical
purposes, the remainder of this paper will interpret the :PieceOfInformation class as anything
that can be expressed in a natural language sentence. Both the :Evidence and :Hypothesis
classes are sub-classes of :PieceOfInformation. Here, the :Evidence class describes
testimonies, observations and sensory measurements, whereas :Hypothesis provides an
interpretation of these testimonies, observations and sensory measurements. Additionally, other pieces
of information can be indicated to provide insights into instantiations of the :Hypothesis class,
with either a positive :supports, a negative :disputes, or a neutral :providesInsightsInto
relation. This is not possible for instantiations of the :Evidence class.
      </p>
      <p>TIDO-Option: The option module provides the vocabulary to describe which options in the decision
process were considered, which considerations are associated with each option, and which option
was selected during a :Resolution step. Note that both the :Option and :Consideration
classes are subclasses of the :PieceOfInformation class, meaning that instances of either
class can also be used to provide insights into instances of the :Hypothesis class.
TIDO-Case: The case module is used to describe what is being investigated. Here, the :Case class,
modelled as a sub-class of prov:Collection, is a collection which contains all the pieces of
information that bear some degree of relevance to the investigation. The :RQ class contains all
the research questions that are investigated in the decision process.</p>
      <p>
        Figure 2 provides an example of how the TIDO ontology can be used to describe a decision process
in the TI domain. This example concerns the start of an investigation by the Dutch Civil Intelligence
and Security Service (CISS) into a potential threat and is derived from the first episode of a podcast that
was co-produced by the CISS [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In the first step (1) depicted in this example, agent A2 recalls a set
of evidence E that has been produced by agent A1 at an earlier point in time. Afterwards in (2), after
having read the presented evidence, agent A2 drafts the question Q that he or she will try to answer. In
(3), a set of potential hypotheses H is drafted that could answer Q, and either supporting or disputing
connections are made between the pieces of evidence in set E and the hypotheses in set H. In the final
step (4), agent A2 uses his or her common sense and expert knowledge to draft a set of considerations
C that also either support or dispute the hypotheses in H.
      </p>
      <p>[Figure 2: An example instantiation of the TIDO ontology describing the start of a CISS investigation.
The evidence statements include "A report was received by the CISS front-office", "The report originates
from a concerned citizen", "The concerned citizen teaches chemical engineering" and "During a practical,
the concerned citizen noticed that one of his students was asking many questions about substances that
could be used to create an explosive". Via tido:supports and tido:disputes relations these inform the
hypotheses "There is a direct cause of concern", "This situation could eventually lead to a cause of
concern." and "There is not sufficient reason to be concerned", which answer the research question
"Should we be concerned?".]</p>
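      <p>To make the decision trace tangible, the following is a minimal sketch (plain Python, no RDF library assumed) of how a fragment of the Figure 2 example could be encoded as subject-predicate-object triples. The instance identifiers (ex:report, ex:agent1, ex:question, ex:hypothesis1) and the example.org namespace are hypothetical; only the tido: and prov: terms come from the ontology described above.</p>

```python
# Hypothetical encoding of a fragment of the Figure 2 decision trace as
# subject-predicate-object triples using TIDO vocabulary terms.
TIDO = "https://w3id.org/tido#"
PROV = "http://www.w3.org/ns/prov#"
EX = "http://example.org/case42#"  # hypothetical instance namespace

triples = [
    (EX + "report",      "rdf:type",               TIDO + "Evidence"),
    (EX + "report",      "rdf:value",              "A report was received by the CISS front-office"),
    (EX + "report",      PROV + "wasAttributedTo", EX + "agent1"),
    (EX + "question",    "rdf:type",               TIDO + "RQ"),
    (EX + "question",    "rdf:value",              "Should we be concerned?"),
    (EX + "hypothesis1", "rdf:type",               TIDO + "Hypothesis"),
    (EX + "hypothesis1", TIDO + "answers",         EX + "question"),
    (EX + "report",      TIDO + "supports",        EX + "hypothesis1"),
]

def objects(subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Which pieces of evidence support hypothesis1?
supporters = [s for s, p, o in triples
              if p == TIDO + "supports" and o == EX + "hypothesis1"]
```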
    </sec>
    <sec id="sec-3">
      <title>3. Uncertainty Representation (for Threat Intelligence)</title>
      <p>
        The TIDO ontology provides a vocabulary to represent the pieces of information, activities and agents
that are part of a decision process in the TI domain. However, as mentioned in Section 1, the TIDO
ontology does not have a dedicated vocabulary for expressing uncertainties associated with the decisions
made within a TIHW. In order to determine which uncertainty representation is most appropriate,
Section 3.1 first addresses RQ1 and characterizes the data used within the TI domain. This analysis was
primarily derived from Miller [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], van Gerwen [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and the Joint Dutch Doctrine for Intelligence from the
Dutch Ministry of Defence [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Afterwards, Section 3.2 addresses RQ2 and introduces three potential
uncertainty representations, where the notation and the main benefits and limitations are primarily
adopted from Zio and Pedroni [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Section 3.3 provides preliminary ideas on how the uncertainty
representations from Section 3.2 can be used to represent the data described in Section 3.1, providing a
starting point for answering RQ3. As this is ongoing work, a prioritisation is made on which components will be
addressed first. Finally, Section 3.4 provides a brief overview of RDF-compatible tools that would enable
the reasoning capabilities of the to-be-selected uncertainty representation.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Data Characterisation</title>
        <p>
          Miller states that ‘intelligence’ in the TI domain is an epistemic product of some process of analysis
and evaluation of information that is done with respect to different criteria, including the likelihood that
it is true, its importance, and relatedly, the reliability of the source. These evaluation criteria are less
concrete and inherently contain some degree of subjectivity, complicating their formalisation. Starting
with the ‘likelihood of information being true’, a distinction should be made according to the multiple
ways a ‘likelihood’ can be interpreted. This is important as the meaning of likelihood in our specific
context determines how it should be represented in a model. Using an inappropriate categorization
may result in an under- or overestimation of the risk or threat [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. For the remainder of this manuscript,
we will use the term certainty to describe the belief in favour of a proposition and the term uncertainty
to describe the lack of belief in a proposition.
        </p>
        <p>First, a distinction should be made in the nature of the uncertainty, where aleatoric uncertainty is
the uncertainty that originates from a random process (e.g. a coin toss), and epistemic uncertainty is
the uncertainty that originates from a lack of knowledge (e.g. limited sensor observability). In the
TI domain, both types occur, with human behaviour being a good example of a source of aleatoric
uncertainty, and missing information concerning a target’s current location being a good example for
a source of epistemic uncertainty. Here, the uncertainty regarding the location of the target can be
resolved by gathering more intelligence regarding the location of the target, whereas the behaviour of
the target will always contain some element of randomness, regardless of the quality of the analyst’s
prediction.</p>
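        <p>The aleatoric/epistemic split can be made concrete with a small simulation, a generic sketch not tied to TIDO: the estimate of a (here assumed fair) coin's bias sharpens as evidence accumulates, while the randomness of a single toss is irreducible. The sample sizes are arbitrary choices for illustration.</p>

```python
import random

random.seed(0)  # deterministic toy experiment

def estimate_heads_prob(n: int) -> float:
    """Estimate P(heads) for a fair coin from n simulated tosses."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Epistemic uncertainty: our *estimate* of the coin's bias sharpens as more
# evidence is gathered (the estimation error shrinks roughly as 1/sqrt(n)).
rough_estimate = estimate_heads_prob(100)
sharp_estimate = estimate_heads_prob(100_000)

# Aleatoric uncertainty: even with the bias p = 0.5 known exactly, a single
# toss remains random; its variance p * (1 - p) = 0.25 cannot be reduced by
# collecting more data.
single_toss_variance = 0.5 * (1 - 0.5)
```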
        <p>Secondly, there are different ways in which uncertainty is derived. The objective approach uses models
to calculate the uncertainty, which are generally grounded in data from past events. Generally, these
models focus on representing the aleatoric component of uncertainty, but they are also capable of
integrating epistemic uncertainty, such as some imperfections of sensors, as long as this uncertainty
can be quantified. The subjective approach is, as the name already suggests, a less well-defined way of
quantifying the uncertainty and is usually performed by domain experts, leaving room for multiple
interpretations. Generally, this type of uncertainty quantification is associated with epistemic
uncertainty, as such quantifications are generally made when no sufficiently accurate model exists, or when
data from past events is missing. However, the subjective approach can also be used to express forms of
aleatoric uncertainty, such as the previously mentioned example of predicting human behaviour. Again,
in the TI domain, both types of uncertainty quantification are used. For instance, the uncertainty of an
image classification algorithm may be expressed using an objective approach, whereas the motives of a
target are better suited to be represented using a subjective approach.</p>
        <p>
          Third, there is the way the uncertainty is interpreted. The probabilistic interpretation focusses on
representing uncertainty as the probability of a specific event occurring. Here, probability is defined as
‘the fraction of times an event A occurs if the situation considered were repeated an infinite number of times ’
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. An example here would be the classical coin flip, which will land on heads every 1 out of 2 times
on average when flipped N times, or the expected number of employees that will click on a phishing
link if it is distributed among N employees. The alternative, the non-probabilistic interpretation, represents
uncertainty as ‘that which is unknown’. This interpretation suggests that we can only believe things that
are true based on the evidence that supports them. For everything else we don’t have evidence for, we don’t
know whether it is true or false. An example of a non-probabilistic uncertainty is the assignment of an
individual being considered a threat or not according to an assessor. Here, a 0.6 could indicate that the
assessor has a 0.6 degree of confidence in the individual being a threat and 0.4 expresses the assessor’s
epistemic uncertainty about the claim. This interpretation aligns with the open-world assumption:
unknown facts may exist, so a lack of evidence for a hypothesis does not imply evidence against it.
        </p>
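        <p>One way to make the 0.6/0.4 example concrete is a belief-function-style mass assignment over the frame {threat, no_threat}, anticipating the formalism of Section 3.2.3. The frame and the numeric values are illustrative assumptions, not taken from a real assessment.</p>

```python
# Illustrative sketch: mass 0.6 committed to "threat", and the remaining 0.4
# left on the whole frame {threat, no_threat} as uncommitted (epistemic)
# uncertainty. All values are hypothetical.
mass = {
    frozenset({"threat"}): 0.6,
    frozenset({"threat", "no_threat"}): 0.4,  # uncommitted belief (ignorance)
}

def belief(event, mass):
    """Sum of masses of all focal sets fully contained in the event."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(event, mass):
    """Sum of masses of all focal sets compatible (overlapping) with the event."""
    return sum(m for focal, m in mass.items() if focal & event)

threat = frozenset({"threat"})
no_threat = frozenset({"no_threat"})
# belief(threat) = 0.6 while plausibility(threat) = 1.0; belief(no_threat) = 0.0
# while plausibility(no_threat) = 0.4 -- the lack of evidence for "threat" is
# not treated as evidence for "no_threat" (the open-world reading above).
```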
        <p>The fourth dimension is whether the uncertainty is ergodic or non-ergodic. Here, ergodicity refers to
whether the true (aleatoric) uncertainty of the system can be measured by taking the average deviation
of the mean value over time. In an ergodic system, the value of system parameters (and their variance)
does not vary over time or the phase of the system. This implies that the uncertainty derived from the
analysis of past observations also generalizes to the current state of the system. Within the domain of
TI, most uncertainties are expected to be to some extent non-ergodic. For example, the behavioural
patterns of individuals or organisations might change over time, for instance by the discovery of a
new strategy or the rise of new technologies. The previously mentioned image classifier would be an
example of an ergodic uncertainty, provided that the target variable remains constant2.</p>
        <p>
          Besides the ‘likelihood of being true’, Miller mentions that information should also be evaluated
according to its importance. Here, it is helpful to make a distinction between importance and relevance.
The diference between intelligence and information is that intelligence is considered to be ‘information
or data (expressible as a statement or, more likely, structured set of statements) that are acquired
for various institutional purposes’[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Within the domain of TI, this purpose is to assess whether a
particular individual or organisation should be considered a threat to the national security. The example
given by Miller describing relevance is that if an arbitrary individual breaks a leg, this information
is not relevant for (most) case investigations, but if this individual is the same as the target that is
investigated, this information suddenly becomes very relevant. Importance however, we interpret to be
more closely associated with the question that an analyst is trying to answer. Here, the information
that the target has broken its leg is less important when creating an overview of the social network of
the target, whereas it is very important when assessing the capabilities or the expected location of the
target. Importance in our interpretation therefore carries a higher degree of context dependence. From
a formal point of view, we can say that we use the term relevance to refer to the relevance of a piece of
information with respect to the case, and the term importance to refer to the relevance of a piece of
information with respect to a research question.
        </p>
        <p>The final evaluation criterion mentioned by Miller is the reliability of the source. Again, no operationalized
description of the term ‘reliability’ is given, but we interpret it as the degree or probability with which it
can be assumed that the information received from this source is correct. Referring back to the example
given in Figure 2, the set of evidence E is provided to agent A2 by agent A1. If A2 has past experiences or
background information that could indicate that evidence provided by A1 could have been manipulated
or misrepresented, agent A2 could adjust the reliability score of agent A1. In turn, this reliability score
could be used as a discount or correction factor on the degree of uncertainty associated with the evidence
set E.</p>
        <p>(Footnote 2: If an image classifier is used to classify pictures containing cars, the interpretation of the
concept ‘car’ should not change. If at some point in the future flying cars are invented, changing our
interpretation of what a car should look like, the uncertainty associated with our car-classifier is no
longer ergodic.)</p>
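        <p>How a reliability score could act as such a discount factor can be sketched in the style of Shafer's discounting operation: masses committed by the source are scaled by the reliability r, and the freed mass 1 − r is moved to the full frame, i.e. to total ignorance. The frame, the masses and the reliability value 0.8 are illustrative assumptions, not part of TIDO.</p>

```python
# Hedged sketch of reliability discounting over a hypothetical two-element frame.
FRAME = frozenset({"threat", "no_threat"})

def discount(mass, reliability):
    """Scale committed masses by `reliability`; move the freed mass to the frame."""
    out = {focal: reliability * m for focal, m in mass.items() if focal != FRAME}
    out[FRAME] = 1 - reliability + reliability * mass.get(FRAME, 0.0)
    return out

# Mass function derived from agent A1's evidence (values are illustrative).
mass_from_a1 = {frozenset({"threat"}): 0.6, FRAME: 0.4}

# Agent A2 considers A1 only 80% reliable.
discounted = discount(mass_from_a1, reliability=0.8)
# The committed mass drops to 0.8 * 0.6 = 0.48 and the ignorance mass grows to
# 0.2 + 0.8 * 0.4 = 0.52; the total mass still sums to 1.
```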
        <p>
          The previous formalisations focussed on representing the characteristics of the data used within the
TI domain. However, what is missing is which data is useful for experts in the TI domain. To address
this element, we re-analysed the competency questions (CQs) of the TIDO ontology [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] that were
produced by domain experts during a focus group held for the development of the TIDO ontology,
which was conducted using standard ontology development practices3. Out of the 14 in-scope CQs
that are not or only partially answerable by TIDO, 4 questions concern the identification of missing
information. This highlights that the uncertainty representation should not only focus on depicting the
information that is known, but also which information is not known. In other terms, it is important
to quantify the ignorance associated with both the questions and the hypotheses considered in the
investigation. For the questions it is important to know whether the set of considered hypotheses covers
the full set of plausible answers, and for the hypotheses it is important to know whether there is
missing information that could help in assessing the likelihood of the hypothesis being true.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Description of uncertainty representations</title>
        <p>
          Broadly speaking, there are two main ‘schools’ in uncertainty representation, diverging in how
uncertainty is interpreted. There are those that adopt the ‘probability’ perspective and aim to map every
type of uncertainty to the well-defined ‘frequentist’ interpretation of uncertainty. This has the benefit
of being compatible with the powerful Bayesian uncertainty aggregation/reasoning paradigm, but the
downside that every expression of uncertainty must be quantified in some form of a frequency
distribution, which is often difficult and sometimes impossible. The other school adopts a ‘non-probabilistic’
perspective, which has a less rigid representation of uncertainty and can therefore be considered more
expressive. However, the downside here is that a custom uncertainty reasoning/aggregation paradigm
is needed, depending on the expressivity required for the application. Each of these ‘schools’ has
different flavours, and providing a complete overview is well beyond the scope of this paper. Instead,
we will focus on the main characteristics of some variants that are particularly relevant for the task of
risk analysis. Additionally, we will focus only on uncertainty representations where the possible values
of a random variable of interest can be described using a discrete set of elements. For a more detailed
description of the discussed uncertainty representations, along with how they might be applied to
continuous variables, we refer to [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Probability Theory</title>
          <p>
            Probability theory adopts the frequentist perspective on uncertainty and assumes that all variables
within the system can be described by a probability distribution function (PDF) p(x) : Ω → [0, 1],
where Ω is the sample space of all the values that a discrete random variable X can assume. Here,
∑x∈Ω p(x) = 1, meaning that the sum of the PDF over all possible values in Ω is equal to 1. The
probability P(E) of any measurable subset E of Ω, called an event, can then be described by
P(E) = ∑x∈E p(x). (1)
          </p>
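          <p>As a minimal numeric illustration of Eq. (1), assuming a hypothetical four-element sample space with an arbitrary PDF:</p>

```python
# Illustrative PDF over a hypothetical sample space Omega = {1, 2, 3, 4}.
pdf = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}       # p(x); values are arbitrary
assert abs(sum(pdf.values()) - 1.0) < 1e-12  # normalisation over Omega

def prob(event, pdf):
    """P(E) = sum of p(x) for x in E, i.e. Eq. (1)."""
    return sum(pdf[x] for x in event)

p_even = prob({2, 4}, pdf)  # P({2, 4}) = 0.2 + 0.4 = 0.6
```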
          <p>Example: Unfortunately, an example for a potential extension to TIDO using general probability
theory is still ongoing work.</p>
          <p>
            Benefits:
• It is relatively simple to implement and explain [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• It can use information about correlations between variables using conditional probability
distribution functions and Bayes’ rule [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>
            Limitations:
• The uncertainty assessments from the expert must be numerical, even when the expert is not
able to provide an exact value. This results in the analyst being forced to make subjective and
often unjustified assumptions and guesses [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• Propagating/aggregating the uncertainty requires complex knowledge concerning the correlations
between variables. Determining these correlations can be a time and resource intensive endeavour
[
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• It confounds ignorance with variability, making it difficult to distinguish between aleatoric and
epistemic uncertainty [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>3: A specific description of these methodologies is outside the scope of this work, but is discussed in other work: w3id.org/tido</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Interval Probabilities</title>
          <p>Interval probabilities, a form of imprecise probabilities, address the previously mentioned limitation
that probabilities need to be assigned to the possible values x ∈ Ω of a discrete random variable X as
a single numerical value. Instead, the likelihood of a subset E ⊆ Ω being true is represented using a
lower probability P̲(E) and an upper probability P̄(E), creating a probability interval [P̲(E), P̄(E)] where
0 ≤ P̲(E) ≤ P̄(E) ≤ 1. The difference ΔP(E) = P̄(E) − P̲(E) is referred to as the imprecision in the
representation of E. Single-valued probabilities are the special case where P̲(E) = P̄(E), such that
ΔP(E) = 0.</p>
          <p>Example: Unfortunately, an example for a potential extension to TIDO using interval probabilities is
still ongoing work.</p>
          <p>
            Benefits:
• Imprecise probabilities are still relatively simple to implement and explain [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• No matter what the distribution of the data, or the correlations within it, imprecise probabilities
will put guaranteed bounds on the upper and lower limits of the uncertainties of each variable [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• It is not necessary to make assumptions on the probability distributions of random variables [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
• Imprecise probabilities, and by extension interval probabilities, are completely based on classical
probability theory and can be regarded as its generalization [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
          </p>
          <p>
            Limitations:
• The imprecision, i.e. the difference between the upper and lower bound on the uncertainty, can
grow very quickly, creating an increasingly conservative estimate as arithmetic operations
are applied to the imprecise probabilities [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• While the absence of assumptions on the distribution between the upper and lower limit of the
uncertainty is considered a benefit, a consequence of this is that the basic form of imprecise
probabilities is not able to utilize this distribution if the data is available. Even when there are
clear intuitions on what the true uncertainty value is, this intuition cannot be captured by only
representing the upper and lower limit of the uncertainty [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
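<p>The first limitation can be illustrated with a toy calculation. Assuming independent events whose probabilities are each known only to lie within [0.4, 0.6] (illustrative numbers, not taken from TIDO), the bounds on their conjunction multiply, and the relative imprecision grows with every operation:</p>

```python
# Conjoin n independent events, each with probability known only to lie
# in [0.4, 0.6]: the lower and upper bounds multiply at every step.
lo, hi = 1.0, 1.0
for _ in range(5):
    lo, hi = lo * 0.4, hi * 0.6

# After five operations the upper bound is (0.6/0.4)**5, roughly 7.6 times
# the lower bound, i.e. the estimate has become far more conservative.
ratio = hi / lo
```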
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Belief Function Theory</title>
          <p>
            Belief Function Theory (BFT), also known as evidence theory or Dempster-Shafer theory [14, 15], is
intended for situations where there is more information available than can be exploited by interval
probabilities, but not enough information to specify a probability distribution function [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ]. Instead,
BFT allows for the incorporation and representation of incomplete information.
          </p>
          <p>
In BFT, uncertainty is represented using a basic probability assignment (BPA) in the form of a mass
distribution 𝑚(𝐴) over all sets 𝐴 in the power set 𝒫(Ω), where Ω is the set of possible values a variable 𝑋
might take. This representation of uncertainty satisfies the following requirements:
𝑚 : 𝒫(Ω) → [0, 1], 𝑚(∅) = 0,
          </p>
          <p>∑_{𝐴 ∈ 𝒫(Ω)} 𝑚(𝐴) = 1. (2)</p>
          <p>
            Using this representation, the belief, plausibility and commonality measures are defined by
Bel(𝐴) = ∑_{𝐵 ⊆ 𝐴} 𝑚(𝐵),  Pl(𝐴) = ∑_{𝐵 ∩ 𝐴 ≠ ∅} 𝑚(𝐵),  Q(𝐴) = ∑_{𝐵 ⊇ 𝐴} 𝑚(𝐵),  ∀𝐴 ∈ 𝒫(Ω). (3)
Example. To give an explanation of how to interpret these measures, let us have another look at the
example given in Figure 2. Let us assume that in this example, the answer to 𝑄 is the variable that we
wish to evaluate, and 𝐻 is the set of possible values the answer to 𝑄 might take. Then, we can define
a mass function 𝑚_𝑄 : 𝒫(𝐻) → [0, 1]. Potential knowledge elicitation methods to obtain this mass
function from an expert are discussed in Section 3.3, but for now, let us make an educated guess of what
such a mass function could look like. Taking a closer look at the tido:supports relations in Figure 2,
we see that the pieces of evidence 𝑒2 and 𝑒3 and consideration 𝑐1 support that ℎ2 is the correct answer to
𝑄. Considering that the biggest set of available information only supports ℎ2, we can assign a relatively
large mass to the set {ℎ2}, so suppose this value is 𝑚_𝑄({ℎ2}) = 0.4. Piece of evidence 𝑒4 supports both ℎ1
and ℎ2. Looking at the content of 𝑒4, we could argue that this is the most important piece of information
when evaluating the possible answers to 𝑄, hence we can also assign a relatively high mass to the set
{ℎ1, ℎ2}, so suppose the mass is 𝑚_𝑄({ℎ1, ℎ2}) = 0.3. Consideration 𝑐2 supports ℎ3, but arguably, this piece
of information is less important than 𝑒4. Therefore, we can assign a smaller mass to the set {ℎ3}, so
suppose that 𝑚_𝑄({ℎ3}) = 0.1. Finally, we can also model that there is still some degree of ignorance
over the complete set of hypotheses 𝐻, where the analyst cannot distinguish between any potential
answers to 𝑄. Here, suppose that the mass assigned to this ignorance set is 𝑚_𝑄(𝐻) = 0.2. As there is no evidence
to support any other set from the power set 𝒫(𝐻), the mass for all other sets is equal to 0.
          </p>
          <p>To summarize, the mass function 𝑚_𝑄(⋅) would take the following values:
𝑚_𝑄(𝐴) = 0.2 if 𝐴 = 𝐻;  0.3 if 𝐴 = {ℎ1, ℎ2};  0.4 if 𝐴 = {ℎ2};  0.1 if 𝐴 = {ℎ3};  0 otherwise. (4)</p>
          <p>Now, let us revisit the belief, plausibility and commonality measures defined in (3). In the current
example, belief (Bel(⋅)) is interpreted as the degree of certainty to which the analyst believes that, based
on the available evidence in set 𝐸 and his considerations in set 𝐶, the true answer to 𝑄 is assigned to the
set 𝐴 or a subset of 𝐴. The plausibility measure (Pl(⋅)) can be interpreted as the degree of certainty to
which evidence 𝐸 and considerations 𝐶 assign the answer for 𝑄 to any set 𝐵 in 𝒫(𝐻) that overlaps with
𝐴. Finally, the commonality measure (Q(⋅)) is interpreted as the degree of certainty to which evidence 𝐸
and considerations 𝐶 assign the answer for 𝑄 to 𝐴 or a superset of 𝐴.</p>
          <p>Combining (3) with (4), we get the following values for the belief, plausibility and commonality functions:
Bel({ℎ1}) = 0,   Pl({ℎ1}) = 0.5,  Q({ℎ1}) = 0.5,
Bel({ℎ2}) = 0.4, Pl({ℎ2}) = 0.9,  Q({ℎ2}) = 0.9,
Bel({ℎ3}) = 0.1, Pl({ℎ3}) = 0.3,  Q({ℎ3}) = 0.3,
Bel(𝐻) = 1,      Pl(𝐻) = 1,       Q(𝐻) = 0.2. (5)</p>
          <p>Looking at the values in (5), we see that out of the singleton sets {ℎ1}, {ℎ2} and {ℎ3}, the belief, plausibility
and commonality measures for {ℎ2} are the highest, indicating that ℎ2 would most likely be the best
answer to 𝑄. However, there is still a gap between Bel({ℎ2}) and Pl({ℎ2}), indicating that there is still a
relatively high degree of epistemic uncertainty associated with ℎ2. Therefore, an analyst might decide
to first investigate the case further to lower his or her degree of epistemic uncertainty before making a
definitive decision on what he or she considers to be the best answer to 𝑄.</p>
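<p>The computations in this example can be reproduced with a short Python sketch; the mass values are the educated guesses from the text, and the function names mirror the three measures:</p>

```python
# Mass function m_Q from the running example.
H = frozenset({"h1", "h2", "h3"})
m = {
    frozenset({"h2"}): 0.4,        # e2, e3 and c1 support h2
    frozenset({"h1", "h2"}): 0.3,  # e4 supports both h1 and h2
    frozenset({"h3"}): 0.1,        # c2 supports h3
    H: 0.2,                        # residual ignorance over the whole set
}

def bel(A):
    """Belief: total mass committed to subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    """Plausibility: total mass of focal sets overlapping A."""
    return sum(v for B, v in m.items() if B & A)

def q(A):
    """Commonality: total mass of focal sets containing A."""
    return sum(v for B, v in m.items() if B >= A)

print(round(bel(frozenset({"h2"})), 2))  # 0.4
print(round(pl(frozenset({"h2"})), 2))   # 0.9
```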
          <p>
            Benefits:
• The BPA can be constructed with almost any kind of data. For instance, regular probabilities
can be assigned to the singleton values in Ω, and complete ignorance can be modelled by setting
𝑚(Ω) = 1 [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• BFT encompasses probability theory when the BPA is only defined over singletons or disjoint
sets [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• It produces bounds that get narrower with better empirical information [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>
            • Again, it is relatively simple to implement [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>
            Limitations:
• When combining multiple BPAs, not only should you be cautious of dependencies between the
evidence used, but the shape of the mass functions should also be considered when selecting an
appropriate combination rule. For instance, one of the most popular rules to do so, Dempster’s
rule, produces unintuitive results when faced with contradicting mass functions [
            <xref ref-type="bibr" rid="ref3">3, 16</xref>
            ].
• The output measures of belief, plausibility and commonality tend to be more difficult to translate
to specific decision points than the standard probability over the singletons. A translation is
needed that can transform the beliefs from a credal level to a pignistic level [17]. Smets introduces
such a translation in [17], but other translations exist as well [18].
          </p>
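<p>The counterintuitive behaviour under conflict can be seen in Zadeh’s well-known example, sketched below with a straightforward implementation of Dempster’s rule:</p>

```python
def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over frozensets."""
    combined = {}
    conflict = 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v * w
            else:
                conflict += v * w  # mass assigned to the empty intersection
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    # Normalize the remaining mass by 1 - conflict.
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Zadeh's classic example: two experts in near-total conflict.
m1 = {frozenset({"a"}): 0.99, frozenset({"b"}): 0.01}
m2 = {frozenset({"c"}): 0.99, frozenset({"b"}): 0.01}
combined = dempster(m1, m2)
# All mass lands on {"b"}, even though neither expert considered b likely.
```

<p>Note how the normalization by 1 − conflict is what produces the unintuitive outcome when the conflict is close to 1.</p>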
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Future Steps</title>
        <p>The work presented in the previous section provides a preliminary idea of how a potential uncertainty
representation might be integrated into the TIDO ontology, focussing on how the uncertainty of
hypotheses derived within a TIHW could be represented. We consider this to be the first step in answering
RQ3, but other steps are still to be taken. We have roughly divided the required extensions to TIDO
into five steps, as depicted in Figure 3, and we plan on addressing these steps one at a time.</p>
        <sec id="sec-3-3-1">
          <title>Step 1: Uncertainty Representation over the Hypotheses Set</title>
          <p>
            We believe that the expression of the uncertainty over the available hypotheses should be the first
point addressed in the uncertainty representation of TIDO, as this represents how the analyst has
interpreted the available information and provides the foundation for follow-up actions. Being certain
about a hypothesis, or a group of hypotheses, being true could be used as an argument for mitigating
actions, whereas a high degree of uncertainty could be used as an argument to investigate further. The
example provided in Section 3.2.3 provides an indication of what an uncertainty representation for the
hypothesis set might look like. However, the following points should also be addressed before step 1
can be considered finished:
Finish examples: As of yet, only the example for BFT has been worked out. Before a proper
comparison can be made between BFT and basic probability theory and interval probabilities, the
examples for these cases will need to be worked out as well. Alternatively, we could focus our
attention on the probability bound analysis (PBA) paradigm, which combines basic probability
theory with interval probabilities [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] and already has some implementations in the field of threat
assessment [19, 20].
          </p>
          <p>Expert elicitation: For the example in Section 3.2.3, educated guesses were made on what a possible
mass function could look like given the data depicted in Figure 2. However, if the TIDO ontology
is to be used by domain experts, a quantified degree of belief over the available hypotheses must be
derived from the experts’ input. For the elicitation of mass functions, many methodologies exist,
such as the audit risk model [21], the Analytical Hierarchy Process (AHP) [22], a pair-wise ranking
based procedure [23], and a methodology using Likert-scales [24]. Expert elicitation techniques
for obtaining subjective probabilities or probability intervals still need to be investigated. The
feasibility of implementing a suitable elicitation methodology should also play an important role
in the selection of the uncertainty representation for TIDO.</p>
          <p>Representation in a knowledge graph: In order to capture the data, it must somehow be stored in
a database. As the TIDO ontology is based on the Web Ontology Language (OWL), it is important
to think about how the uncertainty can be represented such that it is compatible with the OWL
syntax, or more specifically the Resource Description Framework (RDF). For example, directly
implementing BFT would require mass values associated not only with the singletons in the set
of hypotheses 𝐻, but also with the elements in the power set of 𝐻. In the worst case, every
element in the power set of 𝐻 would need a unique node in the graph, resulting in exponential
growth in the number of nodes in the KG with respect to the size of 𝐻. Not only could this
substantially increase the required storage capacity, it might also make it more difficult to retrieve
this information from the graph using SPARQL queries. Some OWL ontologies already exist to
represent uncertainties, and before designing a custom extension to TIDO, potential mappings
from TIDO to these ontologies should be investigated. For probability theory, PR-OWL [25] could
be an option, and for BFT, BeliefOWL [26]. No OWL ontology was found for
probability intervals.</p>
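<p>As a rough illustration of the storage concern, a naive triple-based encoding could give every focal set its own node; the class and predicate names below (ex:FocalSet, ex:mass, ex:contains) are purely hypothetical and not part of TIDO or any existing vocabulary:</p>

```python
def focal_set_triples(set_id, members, mass):
    """Encode one focal set of a mass function as naive RDF-style triples.
    In the worst case one such node is needed for each of the 2**|H|
    elements of the power set of the hypothesis set H."""
    node = f"ex:focalSet_{set_id}"
    triples = [(node, "rdf:type", "ex:FocalSet"),
               (node, "ex:mass", str(mass))]
    # One membership triple per hypothesis contained in the focal set.
    triples += [(node, "ex:contains", f"ex:{h}") for h in sorted(members)]
    return triples

triples = focal_set_triples("h1h2", {"h1", "h2"}, 0.3)
# 4 triples for this node: type, mass, and one ex:contains per hypothesis.
```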
        </sec>
        <sec id="sec-3-3-2">
          <title>Step 2: Conditional Probabilities or Belief Functions</title>
          <p>Once step 1 is completed, there is an uncertainty representation that allows a TI expert to represent their
belief about the uncertainties associated with the hypotheses in  , which, ideally, the expert derives
Stepphhhhr2oaaaavssssiIIIIdmmmm?ei:tttrrvsopabuedoTAwppppspooooCttriirrrrddoottttooaaaavn::nnssnnidduuccccippeteeeeippso????ooCntrri"ttdassSoc?oolhn:?psnodpurcrudirpoetlfodip:vrovvnoiawndreiltdueasdetee?lisbd?s"CoeC:soounpndpdio?tirittoisoi:tttrrvsoapuebdoTAwnna?al?i:tttrrsvoapbuedoTAwl? hi:tttrrvsopabuedoTAwhahhasaaPsssCCrCoeeebrrratttaaabiiininnlhihthtttyyayaya???s?shsCUaPosplaBmpueemslriiBebofionl?iuatylf:rvdaeulni?tdl:frvdauey??l:frveuad i:ttrssuopop it:trssopupod??"ecTc"v?ahaTeuiuhsnsseteseurieatooulflifayscctoilaooennnadccdiecreeortroncnut."la"d</p>
          <p>d
hasLowerBound? ?suff"iTciheenrtereisansoont to
? hasVariance? be concerned"
?</p>
          <p>?
tido:Evidence
from the pieces of evidence in  and the considerations in  . However, there is no vocabulary for the
expert to precisely express how these pieces of evidence and considerations influence the uncertainty
associated with the elements in  . This information could be very useful when determining to what
degree each piece of evidence or consideration contributed to the uncertainty distribution over the
hypothesis, and consequently, the final decision. The TIDO ontology does provide the opportunity for
the expert to express whether pieces of information support (using tido:supports) or dispute (using
tido:disputes) a hypothesis, but the effect of using these relations on the uncertainty associated
with the hypotheses in 𝐻 is not formalized. To formalize this relationship, some form of conditional
probability distribution (𝑃(𝐴|⋅)) or conditional belief mass (𝑚(𝐴|⋅)) would be needed, where 𝐴 ∈
𝒫(𝐻) and ⋅ would be a piece of information relevant for answering research question 𝑄.</p>
          <p>To implement this step, the same points need to be addressed as for step 1: What would a working
example look like for each of the selected uncertainty representations? How would we elicit the needed
numerical values from the experts? And how could this be represented in a KG? As of yet, these
points have not been investigated, although we expect to encounter challenges when addressing the
dependencies that exist within the available information. For example, the pieces of evidence 𝑒3 and 𝑒4
depicted in Figure 2 show a clear dependency, as 𝑒3 provides an indication that the ‘concerned citizen’ in 𝑒4
is knowledgeable about explosive substances, increasing the degree to which 𝑒4 provides an indication
for hypothesis ℎ1.</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>Step 3: Importance with Respect to the Research Question</title>
          <p>In Section 3.1 we discussed that we use the term relevance to refer to the relevance of pieces of
information to the case investigation, and the term importance to describe the relevance of a piece of
information with respect to the research question. Therefore, we can use formal representations of
relevance to model our notion of importance. For example, we can capture this context dependence by
introducing a distance measure, as is commonly done in frameworks that need to deal with different
levels of relevance [27, 28]. In our case, we could define a distance between the pieces of information
used within the analysis and the research question.</p>
        </sec>
        <sec id="sec-3-3-4">
          <title>Step 4: Reliability of the Source</title>
          <p>
            Several options exist for accounting for the reliability of the source in a potential uncertainty
representation for TIDO. The author of [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] suggests using the formalisms used to model imprecise probabilities to
model the reliability of a source instead. Alternatively, the authors of [29] describe how a combination
of the reliability of the source and the degree of disagreement within evidence can be used to compose
a discount factor for a mass function, similarly to how [30] uses the truthfulness of the source as a
correction factor.
          </p>
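<p>A minimal sketch of the classical discounting operation that such reliability-based corrections build on: masses are scaled by a reliability factor α, and the remainder is moved onto total ignorance (the full hypothesis set). The specific combination with a disagreement measure from [29] is not reproduced here.</p>

```python
def discount(m, frame, alpha):
    """Classical discounting: scale each focal mass by reliability alpha in
    [0, 1] and move the remaining 1 - alpha onto total ignorance (the frame)."""
    out = {A: alpha * v for A, v in m.items() if A != frame}
    out[frame] = 1.0 - alpha + alpha * m.get(frame, 0.0)
    return out

# Mass function from the example in Section 3.2.3, from a source judged
# 80% reliable.
H = frozenset({"h1", "h2", "h3"})
m = {frozenset({"h2"}): 0.4, frozenset({"h1", "h2"}): 0.3,
     frozenset({"h3"}): 0.1, H: 0.2}
md = discount(m, H, 0.8)
# Masses still sum to 1; the ignorance mass grows from 0.2 to 0.36.
```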
        </sec>
        <sec id="sec-3-3-5">
          <title>Step 5: Uncertainty Representation over information from External Sources</title>
          <p>The final step that warrants investigation is the representation of uncertainty over the evidence set 𝐸.
Both the hypotheses set 𝐻 and the consideration set 𝐶 are generated by agent 𝑎2, implying that agent 𝑎2
is also responsible for providing the uncertainties associated with the elements in 𝐻 and 𝐶. However, the
elements in 𝐸 are provided by a different agent 𝑎1. In theory, this agent could represent the uncertainties
associated with the pieces of information completely differently from how 𝑎2 would represent these
uncertainties. For example, 𝑎1 could use interval probabilities to express its uncertainties, whereas 𝑎2
could use BFT. If 𝑎2 is to use the uncertainties provided by 𝑎1 to make assessments about the hypotheses
in 𝐻, the uncertainties from 𝑎1 would need to be translated to a representation compatible with the
uncertainty representation of 𝑎2. Following this line of reasoning, it would make sense for 𝑎2 to use
the most expressive form of uncertainty representation, but the possibility of utilizing some form of a
translation function should be investigated as well.</p>
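<p>For the binary case, one such translation is direct: a probability interval on a single event corresponds to a mass function whose belief and plausibility recover the original bounds. A sketch, assuming a two-element frame:</p>

```python
def interval_to_mass(lower, upper):
    """Translate a probability interval [lower, upper] on an event A into a
    mass function over the binary frame {A, notA}. Bel({A}) = lower and
    Pl({A}) = upper then recover the original interval."""
    assert 0.0 <= lower <= upper <= 1.0
    return {
        frozenset({"A"}): lower,                  # evidence directly for A
        frozenset({"notA"}): 1.0 - upper,         # evidence directly against A
        frozenset({"A", "notA"}): upper - lower,  # unresolved imprecision
    }

mass = interval_to_mass(0.3, 0.7)
```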
          <p>[Figure 4: timeline of the PhD from March 2023 to March 2027, with the current position marked, divided into three stages. Familiarisation with the domain: understanding the threat intelligence domain, determining a suitable use case, getting familiar with semantic technologies. Vocabulary adaptation to the selected use case: requirement analysis, ontology development, ontology validation. Reasoning over uncertain data: characterise uncertainty in the TI domain, determine a suitable uncertainty representation, extend the previously developed ontology.]</p>
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Available Tooling</title>
        <p>As mentioned in Section 1, our overview of the available tooling to represent and reason over
uncertainties is limited, and we hope to obtain interesting leads during the doctoral consortium. So far, with
respect to modelling probabilistic ontologies, we have found PR-OWL [25] and BayesOwl [31], which
both represent the uncertainties in the form of Bayesian networks. While either of these frameworks
appear to be suitable if we opt to represent the uncertainties in TIDO using standard probability theory,
these frameworks do not appear to be compatible with interval probabilities or BFT. With respect to
BFT, we only found a single position paper introducing the core concepts of BeliefOWL [26], but a
complete implementation of this ontology appears to be absent. For interval probabilities, we were
not able to find a published OWL-based ontology. Instead, we could investigate the usage of the
pba-for-python Python package [32] if we do not wish to implement such an ontology ourselves.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Approach, Progress and Conclusions</title>
      <p>The aim of the presented research is to develop a vocabulary capable of representing a Threat
Intelligence Hybrid Workflow (TIHW). The contribution of the student to this work is summarized in
three stages: (1) The start-up of the project, where the main goal is to get familiar with the domain,
(2) the adaptation of available vocabularies to a use-case identified during the first phase, and (3) the
integration of an uncertainty reasoning paradigm into the ontology developed in the second phase. A
timeline of these phases is shown in Figure 4.</p>
      <p>The first two phases can be considered as completed with the development of the TIDO ontology
described in Section 2. The remainder of the work presented in this consortium paper represents the
first steps towards the third phase. In this phase, the student extended the domain analysis of the
TI domain with a characterisation of the data used within this domain, as presented in Section 3.1.
Afterwards, three different uncertainty representations, general probability theory, interval probabilities
and belief function theory, are analysed in Section 3.2. Based on this analysis, a first step is made in the
integration of an uncertainty representation into the TIDO ontology in Section 3.3.</p>
      <p>Up until now, our focus has mainly been on belief function theory, but the upcoming time will
be spent on drafting similar examples for the general probability theory and interval probability
implementations. Afterwards, we will investigate how to integrate conditional probabilities and belief
functions, our notion of importance with respect to the research question, the reliability of the source,
and the uncertainty representation for information originating from external sources. Additionally, we
still wish to investigate what tooling is available that allows for reasoning over uncertain information
using the formalisms we previously analysed.</p>
      <p>As for the potential achievements of this research, we believe that incorporating a suitable uncertainty
representation into the TIDO ontology would substantially improve its ability to accurately capture
the TIHW. As the decision processes within the TI domain are greatly influenced by the uncertainties
associated with the information used within these processes, an accurate representation of these
uncertainties would not only allow an analyst to make a better assessment of the situation, but also
allow for the reconstruction of the perceived uncertainties during an audit of such a TIHW.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This PhD project is supervised by Professor Fabio Massacci, and co-supervised by Associate Professor
Stefan Schlobach and Assistant Professor Lise Stork. Additionally, the author would like to thank Daira
Pinto Pietro for her discussions on possible integrations of belief function theory into the TIDO
ontology. Finally, we gratefully acknowledge the funding support by the Nederlandse Organisatie voor
Wetenschappelijk Onderzoek (NWO) under the KIC HEWSTI Project with grant no. KIC1.VE01.20.004.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Gemini 2.5 Flash in order to: Paraphrase and
reword, Grammar and spelling check. After using this tool, the authors reviewed and edited the content
as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Van Gerwen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Constantino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Roothaert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Weerheijm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pavlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Klievink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlobach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tuma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Massacci</surname>
          </string-name>
          , To Know What You Do Not Know:
          <article-title>Challenges for Explainable AI for Security and Threat Intelligence</article-title>
          ,
          <source>in: Artificial Intelligence for Security</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>83</lpage>
          . doi:10.1007/978-3-031-57452-8_4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Massacci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Papotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Weerheijm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>A. M. van Gerwen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mezzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Constantino</given-names>
            <surname>Torres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Roothaert</surname>
          </string-name>
          ,
          <source>Hybrid Explainable Workflows for Security and Threat Intelligence</source>
          ,
          <year>2023</year>
          . URL: https://www.nwo.nl/en/projects/kich1ve0120004.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Pedroni</surname>
          </string-name>
          ,
          <article-title>Literature Review of Methods for Representing Uncertainty</article-title>
          ,
          <source>Technical Report</source>
          , Fondation pour une culture de sécurité industrielle,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <source>On National Security Intelligence</source>
          , 1 ed.,
          <source>Routledge</source>
          , London,
          <year>2024</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>50</lpage>
          . doi:10.4324/9781003106449-3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chávez-Feria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>García-Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poveda-Villalón</surname>
          </string-name>
          ,
          <article-title>Chowlk: From UML-Based Ontology Conceptualizations to OWL</article-title>
          , in: P. Groth, M.-E. Vidal, F. Suchanek, P. Szekley, P. Kapanipathi, C. Pesquita, H. Skaf-Molli, M. Tamper (Eds.),
          <source>The Semantic Web</source>
          , volume
          <volume>13261</volume>
          , Springer International Publishing, Cham,
          <year>2022</year>
          , pp.
          <fpage>338</fpage>
          -
          <lpage>352</lpage>
          . doi:10.1007/978-3-031-06981-9_20.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lebo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sahoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Belhajjame</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cheney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Corsar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Garijo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Soiland-Reyes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zednik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <source>PROV-O: The PROV Ontology</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Bales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. L.</given-names>
            <surname>Strodtbeck</surname>
          </string-name>
          ,
          <article-title>Phases in group problem-solving</article-title>
          ,
          <volume>46</volume>
          (
          <year>1951</year>
          )
          <fpage>485</fpage>
          -
          <lpage>495</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-L.</given-names>
            <surname>Jousselme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Laskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Blasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dragos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ziegler</surname>
          </string-name>
          ,
          <article-title>URREF: Uncertainty representation and reasoning evaluation framework for information fusion</article-title>
          ,
          <source>Journal of Advances in Information Fusion</source>
          <volume>13</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Rasker</surname>
          </string-name>
          , De Dienst Podcast,
          <year>2021</year>
          . URL: https://podcastluisteren.nl/pod/De-Dienst.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Brouwer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Scholten</surname>
          </string-name>
          ,
          <source>Joint Doctrine Publicatie 2: Inlichtingen</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Der Kiureghian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Ditlevsen</surname>
          </string-name>
          ,
          <article-title>Aleatory or epistemic? Does it matter?</article-title>
          ,
          <source>Structural Safety</source>
          <volume>31</volume>
          (
          <year>2009</year>
          )
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          . doi:10.1016/j.strusafe.2008.06.020.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Roothaert</surname>
          </string-name>
          ,
          <article-title>Supplementary materials used for the development of the TIDO ontology</article-title>
          ,
          <year>2025</year>
          . doi:10.5281/ZENODO.15400486.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L. V.</given-names>
            <surname>Utkin</surname>
          </string-name>
          ,
          <source>Imprecise Reliability: An Introductory Review</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>