<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using a General Prior Knowledge Graph to Improve Data-Driven Causal Network Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Meghamala Sinha</string-name>
          <email>sinham@oregonstate.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stephen A. Ramsey</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Biomedical Sciences, Oregon State University</institution>
          ,
          <addr-line>Corvallis, OR 97331</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>In A. Martin, K. Hinkelmann</institution>
          ,
          <addr-line>H.-G. Fill, A. Gerber, D. Lenat, R. Stolle, F. van Harmelen (Eds.)</addr-line>
          ,
          <institution>Proceedings of the AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering (AAAI-MAKE 2021) - Stanford University</institution>
          ,
          <addr-line>Palo Alto, California</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>School of Electrical Engineering and Computer Science, Oregon State University</institution>
          ,
          <addr-line>Corvallis, OR 97331</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <abstract>
        <p>We describe a method, “Kg2Causal”, for using a large-scale, general-purpose biomedical knowledge graph as a prior for data-driven causal network structure learning. Given a set of observed nodes in a dataset, and some relationship edges between the nodes derived from a knowledge graph, Kg2Causal uses the knowledge graph-derived edges to guide the data-driven inference of a causal Bayesian network. We tested Kg2Causal on several real-world biological datasets with known ground-truth networks and demonstrate improvement in network learning accuracy, relative to a baseline of an uninformative network structure prior. We also demonstrate the application of our method if data are collected under different experimental conditions including interventions on the observed variables.</p>
      </abstract>
      <kwd-group>
        <kwd>Causal inference</kwd>
        <kwd>Structure learning</kwd>
        <kwd>Knowledge graph</kwd>
        <kwd>Informative prior</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Causal network learning is often applied in settings where structured prior knowledge is available. Without expert knowledge, standard network
inference approaches assume, by default, a uniform (uninformative) prior, which can lead to
erroneous relationships or relationship orientations, both due to (i) the size of the space of networks and
(ii) the degeneracy of Markov-equivalent networks. Proper incorporation of informative priors can
enhance model efficiency [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and can also overcome the weakness of smaller datasets.
      </p>
      <p>
        For most applications of causal modeling, some prior knowledge is available. For example, in
medicine, prior knowledge about the etiology, symptoms, and treatment of
underlying diseases or conditions can be obtained from the biomedical literature or
knowledgebases. Although there is in general large-scale availability of structured prior knowledge (for
example, ontologies) in various scientific domains, these mostly comprise disparate
information sources in various standards and formats, which poses a challenge to integrating them into a
single structure. These problems motivated the building of large multi-graphs called knowledge
graphs (KG) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] that incorporate structured knowledge from multiple sources within a
consistent schema. Knowledge graph is a term of art for a large graph-structured model that
stores interlinked relationships between nodes representing concepts [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. These large-scale
networks accommodate structural information which can be leveraged for reasoning,
recommendation or decision making. We hypothesized that combining information from structured
databases of general prior knowledge with causal modeling based on context-specific
multivariate measurements will improve the accuracy of the learned network compared to the result of
data-driven causal modeling without incorporating prior knowledge.
      </p>
      <p>In this work, we propose a method, “Kg2Causal”, for extracting relations as pairs of nodes
from a knowledge graph, and for incorporating them as priors on corresponding edges in a
score-based, data-driven causal network learning method. In this study, prior edges are acquired
from a general biomedical knowledge graph 1. We found
that Kg2Causal had superior network learning accuracy to methods that do not use a general
knowledge-base as a network structure prior. Finally, we demonstrate (Sec. 4.3) the application
of Kg2Causal if data are collected under different experimental conditions including
interventions on the observed variables. We implemented “Kg2Causal” in the R programming language
(leveraging the bnlearn package [17]) and provide the code as open-source software 2.
1: https://github.com/RTXteam/RTX/code/reasoningtool/kg-construction</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work and Background</title>
      <p>In this section, we describe Kg2Causal’s conceptual foundations including CBNs, score-based
causal modeling, interventions, and knowledge graph-based priors in network learning.</p>
      <sec id="sec-2-1">
        <title>2.1. Causal network: Brief Overview</title>
        <p>
          A causal Bayesian network [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ] is a DAG G = (V, E), where V = {X_1, …, X_n} denotes the
set of variables (nodes) and E ⊂ V × V denotes the causal relationships (edges). For an edge
(X_i, X_j), we say that X_i is a parent (cause) of X_j, and X_j is a child (effect) of X_i. We will use
Pa(X_i) to denote the set of parents of X_i. The conditional probability distribution P(X_i ∣ Pa(X_i))
defines the probability of X_i given the states of its parents Pa(X_i). A causal network represents a
joint distribution P over the variables V as long as it satisfies two main assumptions:
        </p>
        <p>a) Causal Markov: Any given variable X_i is independent of its non-descendants, conditioned
on all of its direct causes. This assumption implies that the joint distribution P(V) can be
factored as P(V) = ∏_{i=1}^{n} P(X_i ∣ Pa(X_i)).</p>
        <p>b) Faithfulness: The joint distribution P(X_1, …, X_n) is faithful to G if every conditional
independence relation in P is entailed by the causal Markov assumption applied to G [18].</p>
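        <p>The causal Markov factorization above can be made concrete with a toy numerical sketch (illustrative only; the CPT numbers are invented and this is not part of the paper's R implementation). For a three-node chain in which A causes B and B causes C, the joint distribution is built from the conditional probability tables and must sum to one over all states:</p>

```python
# CPTs for a chain A, B, C (A causes B, B causes C); numbers are invented.
p_a = {0: 0.6, 1: 0.4}                                    # P(A)
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,                  # P(B | A), keyed (b, a)
               (0, 1): 0.3, (1, 1): 0.7}
p_c_given_b = {(0, 0): 0.8, (1, 0): 0.2,                  # P(C | B), keyed (c, b)
               (0, 1): 0.25, (1, 1): 0.75}

def joint(a, b, c):
    """Causal Markov factorization: P(a, b, c) = P(a) P(b | a) P(c | b)."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

# A valid factored joint must sum to 1 over all 8 states.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 10))  # 1.0
```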
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Constructing a causal network</title>
        <p>
          Let us assume we have a dataset D containing observations over a set of n variables. One of the main
classes of causal learning approaches is the score-based approach, which is derived from the classic
Bayesian method, where a scoring function evaluates the fit of a graph G to the data D [
          <xref ref-type="bibr" rid="ref6">16, 6</xref>
          ], with a
higher value indicating better fit. A search algorithm is used to explore the space of all possible
graphs, to maximize the scoring function. Typical heuristic algorithms used for this purpose
include hill-climbing or Tabu search approaches [14]. Other common score-based methods are
GDS [19] and GIES [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. According to the standard Bayesian rule, a causal graph G is learned
from given data D as P(G ∣ D) ∝ P(G) P(D ∣ G), where P(G) is the prior distribution over the space of
all possible DAGs, reflecting prior knowledge, and P(D ∣ G) is the marginal likelihood of the data.
As described in Sec. 3, the Kg2Causal method incorporates a score-based approach.</p>
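        <p>The selection rule P(G ∣ D) ∝ P(G) P(D ∣ G) can be sketched as follows (a toy illustration with invented scores, not the paper's bnlearn-based implementation): on a log scale the posterior score is the sum of the log prior and the log marginal likelihood, and the search returns the highest-scoring candidate. A real search would explore the DAG space heuristically rather than enumerating candidates:</p>

```python
import math

def log_posterior_score(log_prior, log_marginal_likelihood):
    """log P(G | D) = log P(G) + log P(D | G) + const (constant dropped)."""
    return log_prior + log_marginal_likelihood

# Hypothetical log marginal likelihoods for three candidate structures over
# the same three variables, scored under a flat prior of 1/3 each.
candidates = {
    "chain: A to B to C":     log_posterior_score(math.log(1 / 3), -10.2),
    "fork: B to A and C":     log_posterior_score(math.log(1 / 3), -11.5),
    "collider: A and C to B": log_posterior_score(math.log(1 / 3), -9.8),
}
best = max(candidates, key=candidates.get)
print(best)  # collider: A and C to B (highest marginal likelihood wins)
```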
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Learning with interventions</title>
        <p>Interventions—external manipulations of nodes (“targets”) in a network—are important to
detect causal relations and can help disambiguate Markov-equivalent sub-networks [16]. Let T
represent the set of target nodes that are altered in an interventional experiment, and let O = V \ T be
the complementary set of observational variables. Each intervention can have one or more
targets whose conditional probabilities are changed (so that, conditioned on the intervention, a
target variable’s distribution may depend only on a (possibly empty) subset of its parent
observables). Hence, each intervention results in the deletion of arrows pointing towards the
intervened nodes. The joint distribution of V after the intervention is
P′(X_1, …, X_n) = ∏_{i ∈ O} P(X_i ∣ Pa(X_i)) ⋅ ∏_{j ∈ T} P′(X_j ∣ Pa′(X_j)),
where P(X_i ∣ Pa(X_i)) is the conditional probability, as before, of a node that
is not a target node, and P′(X_j ∣ Pa′(X_j)) is the post-intervention conditional probability of X_j, given
its new set of parents Pa′(X_j). For a so-called “perfect” intervention, one would set
Pa′(X_j) = ∅ [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Score-based approaches are well-suited to mixed interventional-observational
datasets, in contrast to constraint-based approaches, which are applicable to observational data.
2: https://github.com/meghasin/Kg2Causal
        </p>
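        <p>The edge-deletion effect of a perfect intervention described above can be sketched in a few lines (an illustrative helper, not the paper's code):</p>

```python
def mutilate(edges, targets):
    """Perfect intervention: drop every arc (u, v) whose head v is a target,
    so each target node's parent set becomes empty."""
    return {(u, v) for (u, v) in edges if v not in targets}

edges = {("A", "B"), ("B", "C"), ("D", "C")}
print(sorted(mutilate(edges, {"C"})))  # [('A', 'B')]
```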
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Incorporation of Priors</title>
        <p>In this subsection we introduce three types of uninformative priors on the network structure
P(G): the uniform prior, the marginal prior, and the Bayesian variable selection prior (VSP). We then
describe the knowledge graph-based prior that we use in the Kg2Causal method. In cases lacking
prior knowledge, the default choice for the prior P(G) is a uniform distribution. Under the Bayesian
rule, adding an arc between nodes a and b changes the posterior as
P(G ∪ {a, b} ∣ D) / P(G ∣ D) = [P(G ∪ {a, b}) / P(G)] ⋅ [P(D ∣ G ∪ {a, b}) / P(D ∣ G)].
Any pair of nodes a and b can be in one of three possible states, a ⇒ b (representing (a, b) ∈ E),
a ⇐ b (representing (b, a) ∈ E), or a ⇎ b (no arc), and each has equal probability of occurrence.
So the probability for these edges is assigned as P(a ⇒ b) = P(a ⇐ b) = P(a ⇎ b) = 1/3, since we
know that P(a ⇒ b) + P(a ⇐ b) + P(a ⇎ b) = 1. This implies P(a ⇒ b) + P(a ⇐ b) = 2/3, which
means a higher promotion for the inclusion of new arcs, favouring the propagation of false positives
in G. Hence, it is not always a good idea to use the uniform prior, especially in cases where the data
are not strongly supportive of the learned DAG and where n is large. A better version of the uniform
prior is to use marginal probabilities instead, where an independent prior can be assumed for each
arc with the same independent marginal probabilities as the uniform prior, also called the marginal
uniform prior [20]. In this case, the probability of inclusion of each edge is assigned as
P(a ⇒ b) = P(a ⇐ b) = 1/4 and P(a ⇎ b) = 1/2.</p>
        <p>Compared to the uniform prior, the marginal uniform prior is less prone to false-positive edges
in the posterior-probability-maximizing graph. The Bayesian variable selection prior (VSP)
assigns a probability of inclusion of possible parent nodes, with the default being 1/n.</p>
        <p>
          The heart of Kg2Causal is the use of an informative prior based on a general-purpose
knowledge graph; for this purpose we use an edge decomposition technique described by Castelo
and Siebes [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. For any pair of vertices (v_i, v_j) for which an edge v_i ⇒ v_j exists in the
general-purpose knowledge graph, we assign a prior probability (= 1/2) on that edge,
with probability 1/4 for v_i ⇐ v_j and probability 1/4 for v_i ⇎ v_j, since the latter two are
alternate states that have no corresponding edge in the general knowledge graph. For pairs
that have no corresponding edge in the general knowledge graph, we use the
uniform probability distribution, as shown in Fig. 2. In this way we can create a complete
prior probability (from partial knowledge) over the network G; on a log scale, we define P(G) as
log P(G) = ∑_{v_i ⇔ v_j ∈ G, i ≠ j} log P(v_i ⇔ v_j) + ∑_{v_i ⇎ v_j ∈ G, i ≠ j} log P(v_i ⇎ v_j).</p>
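        <p>The edge-decomposition prior and its log-scale form can be sketched as follows (an illustrative Python rendering of the probabilities given above; the function names and the toy knowledge graph are ours, not the paper's code):</p>

```python
import math
from itertools import combinations

KG_EDGES = {("A", "B")}  # toy knowledge-graph relation: an edge from A to B

def pair_probs(u, v):
    """Return (P(u to v), P(v to u), P(no arc)) for the pair, per the text:
    1/2, 1/4, 1/4 when the KG has an edge; uniform 1/3 each otherwise."""
    if (u, v) in KG_EDGES:
        return 0.5, 0.25, 0.25
    if (v, u) in KG_EDGES:
        return 0.25, 0.5, 0.25
    return 1 / 3, 1 / 3, 1 / 3

def log_prior(nodes, graph_edges):
    """log P(G) summed over all unordered node pairs, by edge decomposition."""
    total = 0.0
    for u, v in combinations(sorted(nodes), 2):
        p_fwd, p_rev, p_none = pair_probs(u, v)
        if (u, v) in graph_edges:
            total += math.log(p_fwd)
        elif (v, u) in graph_edges:
            total += math.log(p_rev)
        else:
            total += math.log(p_none)
    return total

# A graph that orients the arc as in the KG scores higher than one that flips it.
agree = log_prior({"A", "B", "C"}, {("A", "B")})
flip = log_prior({"A", "B", "C"}, {("B", "A")})
print(agree > flip)  # True
```

<p>This is exactly how a knowledge-graph edge nudges the score-based search: candidate graphs that agree with the prior edge receive a higher structure prior, while the data term can still override it.</p>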
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Knowledge Graphs</title>
        <p>
          A “knowledge graph” [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] is a multigraph consisting of
nodes and edges (labeled by relationship type or a
description of instance attributes) between them. Although most
relationships in knowledge graphs are between entities
and context-based associations, these do not always imply a
causal relationship. Nevertheless, such links are strong
associations that can strengthen causal relationships that we
seek to discover. The key idea of Kg2Causal is to use links
from large knowledge graphs as generalised prior information
to aid in data-driven network learning in highly
specific application contexts. (Figure 2: Complete prior by edge decomposition technique.)
For this work, we leveraged a general biomedical knowledge graph
that we and collaborators (see Acknowledgments) had constructed, KG1 3. KG1 has 130,443
nodes, 3.5M edges, 11 node semantic types, and 17 edge relation types, and was compiled from
20 different biomedical knowledge-bases (Monarch, COHD, ChEMBL, DGIdb, DisGeNet,
Disease Ontology, GeneProf, HMDB, KEGG, miRBase, miRGate, mychem.info, mygene.info, NCBI
Gene, OMIM, Pathway Commons, Pharos, PubChem, Reactome, and UniprotKB). We hosted
KG1 in a Neo4j database (ver. 3.5.13) and used the Cypher query language to search for concept
mappings between ground-truth network variables and concept nodes in the KG1 knowledge
graph, and for edge connections between mapped concepts within KG1.
        </p>
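        <p>The kind of Cypher lookup described above might look roughly like the following sketch. The node property name (curie), the untyped node pattern, and the example identifiers are our assumptions for illustration, not KG1's actual schema; the query is shown as a string, with the neo4j driver call indicated in a comment:</p>

```python
# Hypothetical sketch: given dataset variables already mapped to concept IDs,
# fetch every directed edge among them from the graph database.
mapped_ids = ["UniProtKB:P01579", "UniProtKB:P60568"]  # hypothetical concept IDs

cypher = """
MATCH (a)-[r]->(b)
WHERE a.curie IN $ids AND b.curie IN $ids
RETURN a.curie AS source, type(r) AS relation, b.curie AS target
"""

# With the official neo4j Python driver, one would run roughly:
#   with driver.session() as session:
#       rows = session.run(cypher, ids=mapped_ids).data()
print("$ids" in cypher)  # True
```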
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Our Approach</title>
      <p>We developed Kg2Causal to leverage a general-purpose biomedical knowledge graph (see
Sec. 2.5) in order to improve context-specific, data-driven network learning from multivariate
observations; such observations could consist of gene expression measurements, proteomics
measurements, or electronic health records. The key ideas of our approach are (i) mapping
each variable in the dataset to a node in the knowledge graph, and querying relationships
between them; (ii) extracting a subgraph containing the connected variables with edges between
them; and (iii) using this edge set as our prior knowledge to guide the score-optimizing step
for inferring a causal network. Mathematically, given a dataset D with a set V of observable
variables and given a general-purpose prior knowledge graph Γ as a multigraph, we want to
learn a causal graph G = (V, E) that approximately maximizes the posterior probability, i.e.,
argmax_G P(G ∣ D, Γ), given a prior P(G ∣ Γ). As a comparison, we used three
uninformative prior distributions, namely uniform, marginal, and Bayesian variable selection priors, with
each dataset in order to understand whether or not—and to what extent—using an
informative network prior improves the accuracy of causal network learning in a biomedical context. The
Kg2Causal network discovery workflow, illustrated in Figure 3, consists of the following steps:
• Map the variables V to nodes in Γ, and extract a list of edges from Γ among the nodes
(collapsing same-direction multiedges to single edges).</p>
      <p>3: https://github.com/RTXteam/RTX/code/reasoningtool/kg-construction
• Generate 100 random DAGs with the nodes V. We empirically determined, based on our
previous study [21], that this number is adequate for the medium-to-large datasets 4.
• In the score function, we include edge probability contributions from the prior
knowledge graph (we assign probability 0.5 to every edge in the extracted edge list). For each DAG, we used the
stochastic algorithm Tabu [14] to find a DAG that maximizes the standard Bayesian Dirichlet
equivalent uniform (BDeu) scoring function [15, 16].
• The previous step yields 100 optimized networks. Using these, we compute the
probability of each possible directed edge as its empirical frequency of occurrence among the
DAGs. For example, if an edge (a, b) appears in 80 out of 100 optimized DAGs, we assign
it an empirical probability of 0.80. We store the edge probabilities in a list.
• We threshold the edge probabilities in order to obtain the set of edges E for G. Based
on empirical studies, we chose a threshold of 0.85.</p>
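      <p>The restart-and-average loop in the steps above can be sketched as follows. The structure search itself is stubbed out here (in the paper, each run is a Tabu search maximizing the BDeu score via bnlearn); the stub's recovery rates are invented to show how edge frequencies become probabilities and are then thresholded:</p>

```python
from collections import Counter

def optimize_dag(run_index):
    """Stand-in for one Tabu-search run from one random starting DAG."""
    edges = {("A", "B")}           # pretend this arc is recovered in every run
    if run_index % 2 == 0:
        edges.add(("B", "C"))      # pretend this arc appears in half the runs
    return edges

N_RESTARTS, THRESHOLD = 100, 0.85
counts = Counter()
for i in range(N_RESTARTS):
    counts.update(optimize_dag(i))

# Empirical edge probability = frequency of occurrence among the 100 DAGs.
probs = {edge: n / N_RESTARTS for edge, n in counts.items()}
final_edges = {edge for edge, p in probs.items() if p >= THRESHOLD}
print(sorted(final_edges))  # [('A', 'B')]: 1.00 passes the 0.85 cut, 0.50 does not
```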
      <p>We chose Tabu for its robustness, simplicity (it uses few parameters) and history-dependence
(“memory”), although Kg2Causal is in principle compatible with any optimizing method.</p>
      <sec id="sec-3-1">
        <title>3.1. Observational experiment</title>
        <p>In the case where the dataset  is purely observational (i.e., no interventions) from a single
experiment, Kg2Causal can be implemented algorithmically as described above; we provide a
pseudocode description of the “observational” formulation of Kg2Causal in Algorithm 1.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Mix of Observational and Interventional experiments</title>
        <p>With causal network learning based on a single observational dataset, it is difficult to
differentiate between compatible Markov-equivalent models [22]. In the simple case of three variables
X, Y and Z, there are three possible causal models X ⇒ Y ⇒ Z, X ⇐ Y ⇐ Z, and
X ⇐ Y ⇒ Z; all three structures are Markov equivalent. This ambiguity can be resolved
by incorporating measurements from interventional experiments, causing the Markov-equivalent
structures to have different likelihoods. However, in real-world settings, it is difficult
to obtain such interventional measurements as compared to observational measurements [23].
Even when interventional datasets are available, learning a causal network from mixed
observational and interventional data is challenging, for two reasons: (i) datasets collected from
different experiments under different environmental conditions or batches are not identically
distributed, in which case their underlying causal structures may differ, leading to errors if
network inference is applied to the combined set of measurements; and (ii) in real-world settings
interventions are not “perfect” but rather “uncertain” (i.e., “imperfect” or “fat-hand”),
meaning that the interventions have other unknown targets, which if ignored would likely yield
spurious interactions in network discovery. To deal with such cases, based on our previous
study demonstrating the effectiveness of the Learn and Vote algorithm [21, 24], we extended
Kg2Causal to include learning from a multi-experiment dataset using a voting-based
integration method, where experiment-specific causal networks are learned and combined by weighted
averaging into a consensus causal network. The additional steps in Algorithm 2 are as follows:
1. Let there be K experiments (observational and/or interventional) that produced K
datasets, each with its observed variables and known intervention targets, if any.
2. Repeat steps 1-4 (from Sec. 3) for all K experiments.
3. From the K arc-weight lists, average arc strengths and directions over all the
experiments in which the given arc is not intervened.</p>
        <p>4. Per our earlier work [21], we used a threshold of 0.5 for the average arc probability.</p>
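        <p>The averaging-and-thresholding step above can be sketched as follows (illustrative only; the arc strengths are invented numbers, and the function name is ours rather than the paper's Learn-and-Vote implementation):</p>

```python
def combine(per_experiment_strengths, intervention_targets, threshold=0.5):
    """Average each arc's strength over the experiments in which its head
    node was not an intervention target; keep arcs above the threshold.

    per_experiment_strengths: list of {arc: strength} dicts, one per experiment.
    intervention_targets: list of target-node sets, aligned with the above."""
    all_arcs = {arc for d in per_experiment_strengths for arc in d}
    consensus = set()
    for arc in all_arcs:
        vals = [d.get(arc, 0.0)
                for d, targets in zip(per_experiment_strengths, intervention_targets)
                if arc[1] not in targets]  # skip runs that intervened on the head
        if vals and sum(vals) / len(vals) > threshold:
            consensus.add(arc)
    return consensus

strengths = [{("A", "B"): 0.9}, {("A", "B"): 0.2}, {("A", "B"): 0.8}]
targets = [set(), {"B"}, set()]        # experiment 2 intervened on node B
print(combine(strengths, targets))     # {('A', 'B')}: mean of 0.9 and 0.8 is 0.85
```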
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Analysis and Results</title>
      <p>In this section, we describe the observational datasets and ground-truth networks (Sec. 4.1)
and the simulated mixed interventional-observational datasets (Sec. 4.2) that we analyzed. We
present (Sec. 4.3) the results of empirical studies of network learning performance of Kg2Causal
on these datasets in comparison to other types of network structure priors.</p>
      <sec id="sec-4-1">
        <title>4.1. Observational datasets that were analyzed</title>
        <p>To assess the performance of Kg2Causal on biological network inference problems, we empirically
analyzed five real-world datasets for which published ground-truth networks were available:</p>
        <p>Hepatic encephalopathy: This is a clinical study of a serious liver complication called
hepatic encephalopathy (HE) [25], with conditions such as electrolyte disorders, infections, and poor
spirits. It is a categorical dataset with eight nodes and a ground-truth network containing ten edges.</p>
        <p>Sachs et al. T cell signaling: This is a study of mixed observational and interventional
experiments to infer causal connections between eleven proteins and phospholipids in the
intracellular signaling network of individual human CD4+ T-cells [26]. The dataset contains
single-cell measurements, with a ground-truth network containing twenty edges.</p>
        <p>Hematopoietic Stem Cell Differentiation (HSC): This is a real-world gene regulatory
network for studying myeloid differentiation from multipotent myeloid progenitors to
megakaryocytes, erythrocytes, granulocytes and monocytes [27] in mammals [28]. The dataset
contains measurements of gene expression, with a ground-truth network having thirty edges.</p>
        <p>Gonadal Sex Determination (GSD): This is a real-world model which represents the
gonadal differentiation circuit that governs the transformation of the bipotential gonadal
primordium (BGP) into either female or male gonads [29]. The network consists of eighteen genes
and one node for the urogenital ridge. The dataset contains measurements of gene expression,
with a ground-truth network containing seventy-nine edges [28].</p>
        <p>Yeast cell cycle: This is a dataset derived from a network model of thirty genes participating
in cell-cycle regulation in yeast [30]. The dataset was created by integrating gene expression
data with transitive protein-protein interactions. The ground-truth network has 317 edges.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Mixed observational-interventional datasets</title>
        <p>We tested Kg2Causal using the Sachs et al. interventional dataset and simulated observational and
interventional measurement data from synthetic networks using the bnlearn package. For
observational data, we drew random samples; for interventional data, we set some target
nodes in the network to fixed values in order to create mutilated networks [31] before drawing
samples from them. To simulate an uncertain (or “fat-hand”) intervention [32], we intervened
on one or more child nodes of the intervention’s target node.</p>
        <p>Cancer: This is a synthetic network [33] on causes and consequences of lung cancer. We
simulated data from one observational and one interventional experiment with an equal
number of samples (500) from each experiment to avoid bias. For the interventional experiment we
generated a mutilated network, cancer_mut, with one intervention (node Smoker).</p>
        <p>Asia: This is a synthetic network [34] about the occurrence of lung diseases and their
epidemiological connection to a prior visit to Asia. We simulated one observational and two interventional
experiments from the synthetic network with an equal number of samples (500) from each
experiment to avoid bias. For the interventional experiments we generated
two mutilated networks: asia_mut1 with one intervention (node “Lung Cancer”) and asia_mut2
with two interventions (at nodes “Lung Cancer” and “Tuberculosis”).</p>
        <p>(Figure 5: ROC curves, PR curves, and accuracy for the HE, Sachs, HSC, GSD, and Yeast datasets.)</p>
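        <p>The simulation scheme above (sampling from a mutilated network in which an intervened node is clamped to a fixed value) can be sketched as follows; the two-node network and its probabilities are toy stand-ins for the bnlearn-based simulation:</p>

```python
import random

def sample(n, intervene_on=None, value=None, seed=0):
    """Draw n samples from a toy chain Smoker to Cancer; clamping Smoker
    mimics drawing from the mutilated network for do(Smoker = value)."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        if intervene_on == "Smoker":
            smoker = value                    # arcs into Smoker deleted; value fixed
        else:
            smoker = int(rng.random() > 0.7)  # P(Smoker = 1) = 0.3 (invented)
        p_cancer = 0.2 if smoker else 0.02    # toy conditional probability
        cancer = int(rng.random() > 1 - p_cancer)
        rows.append({"Smoker": smoker, "Cancer": cancer})
    return rows

obs = sample(500)                                  # observational experiment
mut = sample(500, intervene_on="Smoker", value=1)  # interventional: do(Smoker=1)
print(all(r["Smoker"] == 1 for r in mut))  # True
```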
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Analysis of results</title>
        <p>In this section we present results of empirical studies of network learning performance on the
five observational datasets (see Sec. 4.1) and three mixed observational-interventional datasets
(see Sec. 4.2), for Kg2Causal in comparison with three other types of network structure priors.
To quantify performance, we considered the presence of an edge in the ground-truth network
as a “true positive” and the absence of an edge as a “true negative” causal arc. For the observational
datasets, we used Algorithm 1 with the indicated prior (KG, uniform, marginal, or Bayesian VSP)
as described in Sec. 3. For mixed interventional-observational datasets, we used Algorithm 2
with the indicated prior. For each dataset, we found (Fig. 5) that using the general knowledge graph
as a prior improves performance, by ROC, precision/recall, F1, and accuracy. Quantitatively,
Kg2Causal had higher area under the ROC curve (AUROC) and area under the precision-recall curve
(AUPR) scores than network learning with the three non-KG priors tested, for the five
observational (Table 1) and three mixed interventional-observational (Table 2) datasets. Moreover, the
results of comparative analysis of Kg2Causal performance on mixed datasets (Table 2) show the
effect of pooling data from different experiments (Algorithm 1) as compared to voting
(Algorithm 2) for such cases: pooling is better for the small network (Cancer) (consistent with our
previous findings [21]), whereas voting is better for medium-sized networks (Asia and Sachs).</p>
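        <p>The edge-level scoring described above can be sketched as follows (toy edge sets; in the study the predictions come from the thresholded edge probabilities of Algorithm 1 or 2, and AUROC/AUPR additionally sweep the threshold):</p>

```python
def edge_metrics(pred_edges, true_edges, all_pairs):
    """Precision, recall, F1 and accuracy over directed-arc predictions."""
    tp = len(pred_edges & true_edges)
    fp = len(pred_edges - true_edges)
    fn = len(true_edges - pred_edges)
    tn = len(all_pairs - pred_edges - true_edges)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(all_pairs)
    return precision, recall, f1, accuracy

truth = {("A", "B"), ("B", "C")}            # toy ground-truth network
pred = {("A", "B"), ("C", "A")}             # toy thresholded prediction
pairs = {(u, v) for u in "ABC" for v in "ABC" if u != v}
p, r, f1, acc = edge_metrics(pred, truth, pairs)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 2))  # 0.5 0.5 0.5 0.67
```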
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and Conclusion</title>
      <p>
        A limitation of this study is that, due to the lack of availability of large ground-truth causal networks,
all datasets analyzed in this work are for small to medium-sized networks (8-30 nodes); due to the
scalability issues of score-based methods, the Kg2Causal method as described here would be
challenging to apply to larger networks (many hundreds to thousands of nodes and beyond), which
is an area of future work. Further, we plan to explore ways to incorporate a network structure
prior in constraint-based algorithms (for example, the PC algorithm [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]), given the (in general) more
favorable scalability of constraint-based algorithms and given the overwhelming
preponderance of observational-only datasets that are available. We also want to evaluate alternative
methods (other than the method [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] that we are using) for incorporating priors and to compare
them. The present work clearly demonstrates, for the case of causal network learning from small- to
medium-sized biomedical or biological datasets, the importance of aggregating and leveraging
structured prior knowledge in order to maximize network learning accuracy.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgments</title>
      <p>This work was supported in part by the National Center for Advancing Translational Sciences
(NCATS) through the Biomedical Data Translator program (OT2TR002520 &amp; OT2TR003428 to
SAR). We thank David Koslicki, Eric Deutsch, Yao Yao, Zheng Liu, Deqing Qu, Finn Womack,
and Ujjval Kumaria for their work on constructing the KG1 knowledge graph.</p>
      <p>[14] F. Glover, Future paths for integer programming and links to artificial intelligence,
Computers &amp; Operations Research 13 (1986) 533–549.
[15] D. Heckerman, D. Geiger, D. M. Chickering, Learning Bayesian networks: The
combination of knowledge and statistical data, Machine Learning 20 (1995) 197–243.
[16] G. F. Cooper, C. Yoo, Causal discovery from a mixture of experimental and observational
data, in: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence,
Morgan Kaufmann Publishers Inc., 1999, pp. 116–125.
[17] M. Scutari, Learning Bayesian networks with bnlearn, arXiv:0908.3817 (2009).
[18] M. J. Druzdzel, The role of assumptions in causal discovery (2009).
[19] A. Hauser, P. Bühlmann, Characterization and greedy learning of interventional Markov
equivalence classes of directed acyclic graphs, J Mach Learn Res 13 (2012) 2409–2464.
[20] M. Scutari, On the prior and posterior distributions used in graphical modelling, Bayesian
Analysis 8 (2013) 505–532.
[21] M. Sinha, P. Tadepalli, S. A. Ramsey, Voting-based integration algorithm improves causal
network learning from interventional and observational data: an application to cell
signaling network inference, PLOS ONE 16 (2021) e0245776.
[22] D. Koller, N. Friedman, Probabilistic graphical models: principles and techniques,
MIT Press, 2009.
[23] Y. Hagmayer, S. A. Sloman, D. A. Lagnado, M. R. Waldmann, Causal reasoning through
intervention, Causal Learning: Psychology, Philosophy, and Computation (2007) 86–100.
[24] M. Sinha, Causal structure learning from experiments and observations (2019).
[25] Z. Zhang, J. Zhang, Z. Wei, H. Ren, W. Song, et al., Application of tabu search-based
Bayesian networks in exploring related factors of liver cirrhosis complicated with hepatic
encephalopathy and disease identification, Scientific Reports 9 (2019) 1–8.
[26] K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, G. P. Nolan, Causal protein-signaling
networks derived from multiparameter single-cell data, Science 308 (2005) 523–529.
[27] J. Krumsiek, C. Marr, T. Schroeder, F. J. Theis, Hierarchical differentiation of myeloid
progenitors is encoded in the transcription factor network, PLOS ONE 6 (2011).
[28] A. Pratapa, A. P. Jalihal, J. N. Law, A. Bharadwaj, T. Murali, Benchmarking algorithms for
gene regulatory network inference from single-cell transcriptomic data, Nature Methods
17 (2020) 147–154.
[29] O. Ríos, S. Frias, A. Rodríguez, S. Kofman, Merchant, et al., A Boolean network model of
human gonadal sex determination, Theor Biol Med Model 12 (2015) 26.
[30] W. Liu, J. C. Rajapakse, Fusing gene expressions and transitive protein-protein
interactions for inference of gene regulatory networks, BMC Systems Biology 13 (2019) 37.
[31] J. Pearl, Graphical models for probabilistic and causal reasoning, in: Quantified
Representation of Uncertainty and Imprecision, Springer, 1998, pp. 367–389.
[32] D. Eaton, K. Murphy, Exact Bayesian structure learning from uncertain interventions, in:
Artificial Intelligence and Statistics, 2007, pp. 107–114.
[33] K. B. Korb, A. E. Nicholson, Bayesian artificial intelligence, CRC Press, 2010.
[34] S. L. Lauritzen, D. J. Spiegelhalter, Local computations with probabilities on graphical
structures and their application to expert systems, J Roy Stat Soc B (1988) 157–224.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          ,
          <article-title>Causality: models, reasoning, and inference</article-title>
          ,
          <source>Econometric Theory</source>
          <volume>19</volume>
          (
          <year>2003</year>
          )
          <fpage>46</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Spirtes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Glymour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Scheines</surname>
          </string-name>
          ,
          <article-title>Causation, prediction, and search</article-title>
          ,
          <source>Adaptive computation and machine learning</source>
          , MIT Press, Cambridge, MA,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <article-title>Student evaluation model using Bayesian network in an intelligent e-learning system</article-title>
          ,
          <source>Journal of Institute of Integrative Omics and Applied Biotechnology (IIOAB)</source>
          <volume>7</volume>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <article-title>A probabilistic approach for detection and analysis of cognitive flow</article-title>
          , in:
          <source>BMA@UAI</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>44</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <article-title>Method and system for detection and analysis of cognitive flow</article-title>
          ,
          <year>2020</year>
          . US Patent 10,722,164.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Chickering</surname>
          </string-name>
          ,
          <article-title>Learning equivalence classes of Bayesian-network structures</article-title>
          ,
          <source>J Mach Learn Res</source>
          <volume>2</volume>
          (
          <year>2002</year>
          )
          <fpage>445</fpage>
          -
          <lpage>498</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Giudici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Castelo</surname>
          </string-name>
          ,
          <article-title>Improving Markov chain Monte Carlo model search for data mining</article-title>
          ,
          <source>Machine Learning</source>
          <volume>50</volume>
          (
          <year>2003</year>
          )
          <fpage>127</fpage>
          -
          <lpage>158</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Koller</surname>
          </string-name>
          ,
          <article-title>Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks</article-title>
          ,
          <source>Machine Learning</source>
          <volume>50</volume>
          (
          <year>2003</year>
          )
          <fpage>95</fpage>
          -
          <lpage>125</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Carvalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dobra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Carter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>West</surname>
          </string-name>
          ,
          <article-title>Experiments in stochastic computation for high-dimensional graphical models</article-title>
          ,
          <source>Statistical Science</source>
          (
          <year>2005</year>
          )
          <fpage>388</fpage>
          -
          <lpage>400</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Speed</surname>
          </string-name>
          ,
          <article-title>Network inference using informative priors</article-title>
          ,
          <source>Proc Nat Acad Sci USA</source>
          <volume>105</volume>
          (
          <year>2008</year>
          )
          <fpage>14313</fpage>
          -
          <lpage>14318</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Castelo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siebes</surname>
          </string-name>
          ,
          <article-title>Priors on network structures. Biasing the search for Bayesian networks</article-title>
          ,
          <source>Int J Approx Reason</source>
          <volume>24</volume>
          (
          <year>2000</year>
          )
          <fpage>39</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Marttinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>A survey on knowledge graphs: Representation, acquisition and applications</article-title>
          , arXiv preprint arXiv:2002.00388
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ehrlinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wöß</surname>
          </string-name>
          ,
          <article-title>Towards a definition of knowledge graphs</article-title>
          ,
          <source>SEMANTiCS</source>
          (Posters, Demos, SuCCESS)
          <volume>48</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>