<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Scalable Probabilistic Routes⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Suwei Yang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victor C. Liang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kuldeep S. Meel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Grabtaxi Holdings</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Singapore</institution>
          <country country="SG">Singapore</country>
        </aff>
      </contrib-group>
      <fpage>64</fpage>
      <lpage>74</lpage>
      <abstract>
        <p>Inference and prediction of routes have become of interest over the past decade owing to a dramatic increase in package delivery and ride-sharing services. Given the underlying combinatorial structure and the incorporation of probabilities, route prediction involves techniques from both formal methods and machine learning. One promising approach for predicting routes uses decision diagrams that are augmented with probability values. However, the effectiveness of this approach depends on the size of the compiled decision diagrams. The scalability of the approach is limited owing to its empirical runtime and space complexity. In this work, our contributions are two-fold: first, we introduce a relaxed encoding that uses a linear number of variables with respect to the number of vertices in a road network graph, significantly reducing the size of the resultant decision diagrams. Second, instead of a stepwise sampling procedure, we propose a single-pass sampling-based route prediction. In our evaluations arising from a real-world road network, we demonstrate that the resulting system achieves around twice the quality of suggested routes while being an order of magnitude faster than the state-of-the-art.</p>
      </abstract>
      <kwd-group>
        <kwd>Routing</kwd>
        <kwd>Decision diagram</kwd>
        <kwd>Knowledge Compilation</kwd>
        <kwd>Sampling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The past decade has witnessed an unprecedented rise of the service economy, best highlighted by the prevalence of delivery and ride-sharing services [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. For operational and financial efficiency, a fundamental problem for such companies is the inference and prediction of the routes taken by drivers. When a driver receives a job to navigate from location A to B, the ride-sharing app needs to predict the route in order to determine: (1) the trip time, which is an important consideration for the customer, (2) the fare, an important consideration for both the driver and the customer, and (3) the trip experience, since customers feel safe when the driver takes the route described in their app [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. However, the reality is that drivers and customers have preferences, and as such the trips taken are not always the shortest possible by distance or time [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. To this end, delivery and ride-sharing service companies need techniques that can infer the distribution of routes and efficiently predict the likely route a driver takes for a given start and end location.
      </p>
      <p>
        Routing, a classic problem in computer science, has traditionally been approached without considering the learning of distributions [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. However, Choi, Shen, and Darwiche demonstrated through a series of papers that the distribution of routes can be conceptualized as a structured probability distribution (SPD) given the underlying combinatorial structure [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. Decision diagrams, which are particularly well-suited for representing SPDs, have emerged as the state-of-the-art approach for probability-guided routing. The decision diagram based approach allows for learning of SPDs through the use of decision diagrams augmented with probability values, followed by a stepwise process for uncovering the route node by node. However, scalability remains a challenge when using decision diagrams to reason about route distributions, particularly for large road networks. Existing works address this concern in various ways, such as through the use of hierarchical diagrams [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and Structured Bayesian Networks [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Choi et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] partition the structured space into smaller subspaces, with each subspace's SPD represented by a decision diagram. Shen et al. used Structured Bayesian Networks to represent conditional dependencies between sets of random variables, with the distribution within each set of variables represented by a conditional Probabilistic Sentential Decision Diagram (PSDD) [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. Despite these efforts, the scalability of decision diagrams for routing, in terms of space complexity, remains an open issue [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        The primary contribution of this work is to tackle the scalability challenges faced by the current state-of-the-art approaches. Our contributions are two-fold: first, we focus on minimizing the size of the compiled diagram by relaxation and refinement. In particular, instead of learning distributions over the set of all valid routes, we learn distributions over an over-approximation and perform sampling followed by refinement to output a valid route. Second, instead of a stepwise sampling procedure, we perform one-pass sampling by adapting an existing sampling algorithm [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] to perform conditional sampling. Our extensive empirical evaluation over benchmarks arising from real-world, publicly available road network data demonstrates that our approach, called ProbRoute, is able to handle real-world instances that were clearly beyond the reach of the state-of-the-art. Furthermore, on instances that can be handled by the prior state-of-the-art, ProbRoute achieves a median of 10× runtime performance improvement.
      </p>
      <p>ENIGMA-23, September 03–04, 2023, Rhodes, Greece. ⋆ This is an existing work published in LPAR-24. * Corresponding author. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <p>In this section, we introduce the preliminaries and background for the rest of the paper. In the remaining parts of this work, we discuss how to encode simple trips in a graph using Boolean formulas, and we also discuss decision diagrams and probabilistic reasoning with them. To avoid ambiguity, we use vertices to refer to nodes of road network graphs and nodes to refer to nodes of decision diagrams.</p>
        <sec id="sec-2-1-1">
          <title>2.1. Preliminaries</title>
          <p>Boolean Formula. A Boolean variable can take the values true or false. A literal ℓ is a Boolean variable or its negation. Let F be a Boolean formula. F is in conjunctive normal form (CNF) if F is a conjunction of clauses, where each clause is a disjunction of literals. F is satisfiable if there exists an assignment σ of the set of variables Var(F) of F such that F evaluates to true. We refer to σ as a satisfying assignment of F and denote the set of all σ as Sol(F). In the remaining parts of this paper, all formulas and variables are Boolean unless stated otherwise.</p>
          <p>
            Decision Diagrams. Decision diagrams are directed acyclic graph (DAG) representations of logical formulas under the knowledge compilation paradigm. Decision diagrams are designed to enable the tractable handling of certain types of queries, that is, queries that can be answered in polynomial time with respect to the number of nodes in the diagram [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]. We use diagram size to refer to the number of nodes in a decision diagram. In this work we use the OBDD[∧] [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ] variant of OBDD, more specifically Probabilistic OBDD[∧] (PROB) [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ], for which efficient sampling algorithms exist. We discuss the PROB decision diagram in later sections.
          </p>
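<p>To make Sol(F) concrete, the following brute-force sketch (a toy illustration of ours, not code from the paper) enumerates the satisfying assignments of a small CNF formula, here F = (a ∨ b) ∧ (¬a ∨ ¬b), the formula used later in Figure 2:</p>

```python
from itertools import product

def solutions(variables, clauses):
    """Enumerate Sol(F) of a CNF formula F by brute force.

    A clause is a set of literals; a literal is a (variable, polarity)
    pair. An assignment satisfies a clause when some literal agrees
    with it, and satisfies F when every clause is satisfied.
    """
    sols = []
    for values in product([False, True], repeat=len(variables)):
        sigma = dict(zip(variables, values))
        if all(any(sigma[v] == pol for v, pol in clause) for clause in clauses):
            sols.append(sigma)
    return sols

# F = (a or b) and (not a or not b)
F = [{("a", True), ("b", True)}, {("a", False), ("b", False)}]
sols = solutions(["a", "b"], F)
# Exactly the assignments with a != b satisfy F.
```

Brute-force enumeration is exponential in the number of variables, which is precisely why the paper compiles formulas into decision diagrams for tractable querying instead.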
        </sec>
        <sec id="sec-2-1-2">
          <title>2.2. Related Works</title>
          <p>… able to represent SPDs as weighted sums and mixtures of Gaussian distributions. Einsum networks address scalability by utilizing tensor operations implemented in existing deep learning frameworks such as PyTorch [22]. Our work differs from the Einsum network structure: we require the determinism property for decision diagrams, whereas the underlying structure of Einsum networks lacks this property. We introduce the determinism property in the following section.</p>
          <p>Various Boolean encodings have been proposed for representing paths within a graph, including Absolute, Compact, and Relative encodings [23]. These encodings capture both the vertices comprising the path and the ordering information of said vertices. However, these encodings necessitate a polynomial number of Boolean variables, on the order of |V|² variables, for the Absolute, Compact, and Relative encodings. While these encodings accurately represent the space of paths within a graph, they are not efficient and lead to high space and time complexity for downstream routing tasks.</p>
          <p>
            Choi, Shen, and Darwiche demonstrated over a series of papers that the distribution of routes can be conceptualized as a structured probability distribution (SPD) given the underlying combinatorial structure [
            <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
            ]. This approach, referred to as the 'CSD' approach in the rest of this paper, builds on top of existing approaches that represent paths using zero-suppressed decision diagrams [24, 25, 26]. The CSD approach utilizes sentential decision diagrams to represent the SPD of paths and employs a stepwise methodology for handling path queries. Specifically, at each step, the next vertex to visit is determined to be the one with the highest probability, given the vertices already visited and the start and destination vertices. While the CSD approach has been influential in its incorporation of probabilistic elements in addressing the routing problem, it is not without limitations. In particular, there are two main limitations: (1) there are no guarantees of completion, meaning that even if a path exists between a given start and destination vertex, it may not be returned using the CSD approach [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]; (2) the stepwise routing process necessitates the repeated computation of conditional probabilities, resulting in runtime inefficiencies.
          </p>
          <p>In summary, the limitations of prior works are Boolean encodings that require a high number of variables, routing tasks that lack completion guarantees, and numerous conditional probability computations.</p>
          <p>Simple Trip. Let G be an arbitrary undirected graph. A path on G is a sequence of connected vertices v1, v2, ..., vk of G where ∀i, vi+1 ∈ N(vi), with N(v) referring to the neighbours of v. A path π does not contain loops if ∀i, j with i ≠ j, vi ≠ vj. π does not contain a detour if ∀vi, vj ∈ π with j &gt; i + 1, vj ∉ N(vi), i.e., no two non-consecutive vertices of π are adjacent in G. Path π is a simple path if it does not contain loops. A simple path π is a simple trip if it does not contain detours. We denote the set of all simple trips in G as SimpleTrip(G). In Figure 1, d-e-h is a simple trip whereas d-e-f-c-b-e-h and d-e-f-i-h are not, because they contain a loop and a detour respectively. We use Term(π) to refer to the terminal vertices of path π.</p>
          <p>Notations. We use nodes to refer to nodes in a PROB B and vertices to refer to nodes in a graph G(V, E) to avoid ambiguity. Child(n) refers to the children of node n. Hi(n) refers to the hi child of decision node n and Lo(n) refers to the lo child of n. θHi(n) and θLo(n) refer to the parameters associated with the edges connecting decision node n with Hi(n) and Lo(n) respectively in a PROB. Var(n) refers to the associated variable of decision node n. VarSet(n) refers to the set of variables of the formula represented by the PROB with n as the root node. Subdiagram(n) refers to the PROB starting at node n. Parent(n) refers to the immediate parent nodes of n in the PROB.</p>
          <p>PROB Structure. Let B be a PROB which represents a Boolean formula F. B is a DAG comprising four types of nodes - conjunction nodes, decision nodes, the true node, and the false node. A conjunction node (or ∧-node) n splits the Boolean formula F into different sub-formulas, i.e. F = F1 ∧ F2 ∧ ... ∧ Fk. Each sub-formula is represented by a PROB rooted at the corresponding child node of n, such that the sets of variables of F1, F2, ..., Fk are mutually disjoint. A decision node n corresponds to a Boolean variable v and has exactly two child nodes, Hi(n) and Lo(n). Hi(n) represents the decision to set v to true and Lo(n) represents otherwise. We use Var(n) to refer to the Boolean variable v that decision node n is associated with in F. Each branch of n has an associated parameter, and the branch parameters of n sum up to 1. The leaf nodes of a PROB B are the true and false nodes. An assignment σ of the Boolean formula F corresponds to a traversal of the PROB from the root node to the leaf nodes; we denote such a traversal as Rep_B(σ). At each decision node n, the traversal follows the value of variable Var(n) in σ. At each conjunction node, all child branches are traversed. A satisfying assignment of F results in a traversal from the root to the leaves in which only the true node is visited. If a traversal leads to the false node at any point, then the assignment is not a satisfying assignment.</p>
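<p>The loop and detour conditions of the simple-trip definition can be checked mechanically. A minimal sketch, under our chordless-path reading of the no-detour condition and over a hypothetical toy graph (not the paper's Figure 1):</p>

```python
def is_simple_trip(path, adj):
    """Check the simple-trip conditions under one plausible reading:
    no loops (no repeated vertices) and no detours (the path is
    chordless: non-consecutive path vertices are never adjacent).
    `adj` maps each vertex to the set of its neighbours.
    """
    # consecutive vertices must actually be connected
    if any(path[i + 1] not in adj[path[i]] for i in range(len(path) - 1)):
        return False
    if len(set(path)) != len(path):        # contains a loop
        return False
    for i in range(len(path)):
        for j in range(i + 2, len(path)):  # non-consecutive pairs
            if path[j] in adj[path[i]]:    # shortcut edge exists: a detour
                return False
    return True

# A hypothetical toy graph: the direct edge e-h makes d-e-f-i-h a detour.
adj = {
    "d": {"e"}, "e": {"d", "f", "h"}, "f": {"e", "i"},
    "i": {"f", "h"}, "h": {"e", "i"},
}
```

For instance, `is_simple_trip(["d", "e", "h"], adj)` holds, while `["d", "e", "f", "i", "h"]` is rejected because its endpoints could have used the direct edge e-h.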
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.3. PROB: Probabilistic OBDD[∧]</title>
        <p>
          In this subsection, we introduce the PROB (Probabilistic OBDD[∧]) decision diagram structure and its properties. We adopt the same notations as prior work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] for consistency.
        </p>
        <p>Figure 2: A PROB B1 representing F = (a ∨ b) ∧ (¬a ∨ ¬b).</p>
        <p>An assignment of a Boolean formula F is represented by a top-down traversal of a PROB compiled from F. For example, the Boolean formula F = (a ∨ b) ∧ (¬a ∨ ¬b) is represented by the PROB B1 in Figure 2. When a is assigned true and b is assigned false, F evaluates to true. If we have a partial assignment σ′, we can perform inference conditioned on σ′ by visiting only the branches of decision nodes in B1 that agree with σ′. This allows for conditional sampling, which we discuss in Section 3.</p>
        <p>PROB inherits important properties of OBDD[∧] that are useful to our algorithms in later sections. The properties are determinism, decomposability, and smoothness.</p>
        <p>Property 1 (Determinism). For every decision node n, the sets of satisfying assignments represented by Hi(n) and Lo(n) are logically disjoint.</p>
        <p>Property 2 (Decomposability). For every conjunction node n, VarSet(c) ∩ VarSet(c′) = ∅ for all c, c′ ∈ Child(n), c ≠ c′.</p>
        <p>Property 3 (Smoothness). For every decision node n, VarSet(Hi(n)) = VarSet(Lo(n)).</p>
        <p>
          In the rest of this paper, all mentions of PROB refer to smooth PROB. Smoothness can be achieved via a smoothing algorithm introduced in prior work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. We defer the smoothing algorithm to the appendix.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>3. Approach</title>
        <p>In this section, we introduce our approach, ProbRoute, which addresses the aforementioned limitations of existing methods using (1) a novel relaxed encoding that requires a linear number of Boolean variables and (2) a single-pass sampling and refinement approach that provides completeness guarantees. The flow of ProbRoute is shown in Figure 3.</p>
        <p>Figure 3: The flow of ProbRoute - Graph Data and Query, Encode (Sec 3.1), Compile into OBDD[∧], Learn parameters (Sec 3.2), Sample and refinement (Sec 3.3), Sampled Trip.</p>
        <sec id="sec-2-3-1">
          <title>3.1. Relaxed Encoding</title>
          <p>In this work, we present a novel relaxed encoding that only includes vertex membership and terminal information. Our encoding requires only a linear (2|V|) number of Boolean variables, resulting in more succinct decision diagrams and improved runtime performance for downstream tasks. In relation to prior encodings, we observed that the ordering information of vertices can be inferred from the graph given a set of vertices and the terminal vertices, thus enabling us to exclude ordering information from our relaxed encoding. Our relaxed encoding represents an over-approximation of the trips in SimpleTrip(G) for a graph G(V, E) using a linear number of Boolean variables with respect to |V|. We discuss the over-approximation in later parts of this section.</p>
          <p>Our encoding uses two types of Boolean variables, c-type and t-type variables. Each vertex v ∈ V in graph G(V, E) has a corresponding c-type variable c_v and t-type variable t_v. The c-type variable indicates whether vertex v is part of a trip and the t-type variable indicates whether v is a terminal vertex of the trip. Our encoding is the conjunction of the five types of clauses over all vertices in graph G as follows:
(H1) ⋁_{v ∈ V} t_v
(H2) ⋀_{v ∈ V} [c_v → ⋁_{u ∈ N(v)} c_u]
(H3) ⋀_{u, v, w ∈ V, pairwise distinct} (¬t_u ∨ ¬t_v ∨ ¬t_w)
(H4) ⋀_{v ∈ V} ⋀_{u, w ∈ N(v), u ≠ w} [t_v → (¬c_u ∨ ¬c_w)]
(H5) ⋀_{v ∈ V, u ∈ N(v)} [(c_v ∧ c_u) → (t_v ∨ ⋁_{w ∈ N(v), w ≠ u} c_w)]</p>
        </sec>
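<p>A sketch of how the five clause families of the relaxed encoding could be generated for a concrete graph. The clause representation and helper names are ours; the constraints follow the H1-H5 description in Section 3.1:</p>

```python
from itertools import combinations

def relaxed_encoding(adj):
    """Emit H1-H5 as CNF clauses over literals ('c', v) and ('t', v).

    A clause is a set of (sign, literal) pairs, where sign True means
    the positive literal. The encoding uses exactly 2|V| variables.
    """
    V = sorted(adj)
    cls = []
    # H1: at least one terminal vertex.
    cls.append({(True, ("t", v)) for v in V})
    # H2: a trip vertex has at least one neighbouring trip vertex.
    for v in V:
        cls.append({(False, ("c", v))} | {(True, ("c", u)) for u in adj[v]})
    # H3: at most two terminal vertices (no three t-variables all true).
    for u, v, w in combinations(V, 3):
        cls.append({(False, ("t", u)), (False, ("t", v)), (False, ("t", w))})
    # H4: a terminal vertex has at most one neighbouring trip vertex.
    for v in V:
        for u, w in combinations(sorted(adj[v]), 2):
            cls.append({(False, ("t", v)), (False, ("c", u)), (False, ("c", w))})
    # H5: if v and a neighbour u are both in the trip, another neighbour
    # of v is in the trip or v is terminal.
    for v in V:
        for u in sorted(adj[v]):
            cls.append({(False, ("c", v)), (False, ("c", u)), (True, ("t", v))}
                       | {(True, ("c", w)) for w in adj[v] if w != u})
    return cls
```

On the path graph a-b-c, the assignment for the simple trip a-b-c (all c-variables true, t_a and t_c true) satisfies every generated clause, matching Definition 1.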
      </sec>
      <sec id="sec-2-4">
        <p>In our approach, we first use our relaxed encoding to encode SimpleTrip(G) of graph G into a Boolean formula. Next, we compile the Boolean formula into OBDD[∧]. In order to learn from historical trip data, we convert the data into assignments. Subsequently, the OBDD[∧] is parameterized into PROB B and the parameters are learned from data. Finally, to sample trips from a start vertex s to a destination vertex t, we create a partial assignment σ′ in which the variables indicating that s and t are terminal vertices are set to true. The ProbSample algorithm, Algorithm 2, takes σ′ as input and samples a set of satisfying assignments. In the refinement step, a simple trip τ is extracted from each satisfying assignment σ using depth-first search to remove disjoint loop components.</p>
        <p>Definition 1. Let ℳ : SimpleTrip(G) ↦→ Sol(F) such that for a given trip τ ∈ SimpleTrip(G), σ = ℳ(τ) is the assignment whereby the c-type variables of all vertices v ∈ τ and the t-type variables of v ∈ Term(τ) are set to true. All other variables are set to false in σ.</p>
        <p>A simple trip τ in graph G has at least one terminal vertex and at most two terminal vertices, described by encoding components H1 and H3 respectively. At each terminal vertex v of τ, at most one adjacent vertex of v can also be part of τ, and this is encoded by H4. For each vertex v in τ, at least one of its adjacent vertices is in τ, regardless of whether v is a terminal vertex, which is captured by H2. Finally, H5 encodes that if a given vertex v and one of its adjacent vertices are part of τ, then either another neighbour of v is part of τ or v is a terminal vertex.</p>
        <p>We refer to our encoding as a relaxed encoding because the solution space of the constraints over-approximates the space of simple trips in the graph. Notice that while all simple trips correspond to satisfying assignments of the encoding, they are not the only satisfying assignments. Assignments corresponding to a simple trip τ together with a disjoint loop component ℒ also satisfy the constraints. The intuition is that ℒ introduces no additional terminal vertices, hence H1, H3, and H4 remain satisfied. Since each vertex in ℒ has the c-type variables of exactly two of its neighbours set to true, H5 and H2 remain satisfied. Thus, a simple trip with a disjoint loop component also corresponds to a satisfying assignment of our encoding.</p>
        <sec id="sec-2-4-1">
          <title>3.2. Learning Parameters from Data</title>
        </sec>
      </sec>
      <sec id="sec-2-5">
        <title>Algorithm 1 ProbLearn - updates counters of B from data</title>
        <p>Input: PROB B, σ - complete assignment of a data instance
1: n ← rootNode(B)
2: if n is an ∧-node then
3:   for c in Child(n) do
4:     ProbLearn(c, σ)
5: if n is a decision node then
6:   l ← getLiteral(σ, Var(n))
7:   if l is a positive literal then
8:     Hi#(n) += 1
9:     ProbLearn(Hi(n), σ)
10:  else
11:    Lo#(n) += 1
12:    ProbLearn(Lo(n), σ)</p>
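<p>A runnable sketch of ProbLearn's counter updates together with the smoothed parameter computation of Section 3.2. The node layout and helper names are ours, not the paper's implementation:</p>

```python
def prob_learn(node, sigma, counts):
    """One top-down traversal per data assignment sigma (Algorithm 1),
    incrementing branch counters. Node layout (ours):
    ('and', children), ('dec', var, hi, lo), ('true',), ('false',).
    """
    kind = node[0]
    if kind == "and":
        for child in node[1]:
            prob_learn(child, sigma, counts)
    elif kind == "dec":
        _, var, hi, lo = node
        hi_c, lo_c = counts.get(id(node), (0, 0))
        if sigma[var]:                       # positive literal: Hi branch
            counts[id(node)] = (hi_c + 1, lo_c)
            prob_learn(hi, sigma, counts)
        else:                                # negative literal: Lo branch
            counts[id(node)] = (hi_c, lo_c + 1)
            prob_learn(lo, sigma, counts)

def branch_parameters(counts, node_id):
    """Laplace-smoothed parameters: +1 in the numerator, +2 in the
    denominator of each branch, as in Section 3.2."""
    hi_c, lo_c = counts.get(node_id, (0, 0))
    denom = hi_c + lo_c + 2
    return (hi_c + 1) / denom, (lo_c + 1) / denom   # (theta_hi, theta_lo)

# Hypothetical PROB for F = (a or b) and (not a or not b) (Figure 2's B1):
TRUE, FALSE = ("true",), ("false",)
node_hi = ("dec", "b", FALSE, TRUE)   # reached when a = true, so b must be false
node_lo = ("dec", "b", TRUE, FALSE)   # reached when a = false, so b must be true
root = ("dec", "a", node_hi, node_lo)

counts = {}
for sigma in [{"a": True, "b": False}, {"a": False, "b": True},
              {"a": True, "b": False}]:
    prob_learn(root, sigma, counts)
```

With two of the three training assignments taking the Hi branch at the root, the smoothed root parameters come out to (2+1)/5 and (1+1)/5.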
      </sec>
      <sec id="sec-2-6">
        <p>We introduce Algorithm 1, ProbLearn, for updating the branch counters of a PROB B from assignments. In order to learn the branch parameters θHi(n) and θLo(n) of a decision node n, we require a counter for each of its branches, Hi#(n) and Lo#(n) respectively. In the learning process, we have a dataset of assignments for the Boolean variables in the Boolean formula represented by PROB B. For each assignment σ in the dataset, we perform a top-down traversal of B following Algorithm 1. In the traversal, we visit all child branches of conjunction nodes (line 4) and the child branch of decision node n corresponding to the assignment of Var(n) in σ (lines 6 to 12), and increment the corresponding counters in the process. Subsequently, the branch parameters for node n are updated according to the following formulas: θHi(n) = (Hi#(n) + 1) / (Hi#(n) + Lo#(n) + 2) and θLo(n) = (Lo#(n) + 1) / (Hi#(n) + Lo#(n) + 2). While we add 1 to the numerator and 2 to the denominator as a form of Laplace smoothing [27], other forms of smoothing to account for division by zero are possible. Notice that the learnt branch parameters of node n are in fact approximations of conditional probabilities, according to Proposition 1 and Remark 1 as follows.</p>
        <p>Proposition 1. Let n1 and n2 be decision nodes where n1 = Parent(n2) and Lo(n1) = n2. Then θLo(n2) = (Lo#(n2) + 1) / (Lo#(n1) + 2) and θHi(n2) = (Hi#(n2) + 1) / (Lo#(n1) + 2).</p>
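<p>A quick numeric check of the counter identity behind Proposition 1; the counter values are hypothetical:</p>

```python
# When n2 is the Lo child of n1, every traversal that reaches n2 came
# through the Lo branch of n1, so Hi#(n2) + Lo#(n2) == Lo#(n1), and the
# smoothed parameter can be rewritten with Lo#(n1) in the denominator.
hi_n2, lo_n2 = 7, 13           # hypothetical branch counters of n2
lo_n1 = hi_n2 + lo_n2          # every visit to n2 passed through Lo(n1)

theta_lo_n2 = (lo_n2 + 1) / (hi_n2 + lo_n2 + 2)   # Laplace-smoothed parameter
assert theta_lo_n2 == (lo_n2 + 1) / (lo_n1 + 2)   # Proposition 1's form
# As the counters grow large, theta_lo_n2 approaches the empirical
# conditional probability Lo#(n2)/Lo#(n1) (Remark 1).
```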
      </sec>
      <sec id="sec-2-7">
        <title>Proof of Proposition 1</title>
        <p>Recall that the Lo branch parameter of n2 is θLo(n2) = (Lo#(n2) + 1) / (Hi#(n2) + Lo#(n2) + 2). Since Lo(n1) = n2, every traversal that reaches n2 passes through the Lo branch of n1, so Hi#(n2) + Lo#(n2) = Lo#(n1), as in PROB B1 in Figure 2. Observe that Lo#(n2)/Lo#(n1) for PROB B1 in Figure 2 is the conditional probability P(¬b | ¬a), as it represents the count of traversals that passed through the Lo branch of n2 out of the total count of traversals that passed through the Lo branch of n1. A similar observation can be made for Hi(n2).</p>
        <p>Remark 1. Notice that as Lo#(n2) and Lo#(n1) become significantly large, that is Lo#(n2) ≫ 1 and Lo#(n1) ≫ 2, θLo(n2) approaches Lo#(n2)/Lo#(n1), the empirical conditional probability above.</p>
        <sec id="sec-2-7-1">
          <title>3.3. Sampling Trip Query Answers</title>
        </sec>
      </sec>
      <sec id="sec-2-8">
        <title>Algorithm 2 ProbSample - returns sampled assignment</title>
      </sec>
      <sec id="sec-2-9">
        <title>Input: PROB B, partial assignment σ′</title>
        <p>Output: complete assignment that agrees with σ′
1: caches S, W ← initCache()
2: for node n in bottom-up ordering of B do
3:   if n is the ⊤ node then
4:     S[n] ← ∅, W[n] ← 1
5:   else if n is the ⊥ node then
6:     S[n] ← Invalid, W[n] ← 0
7:   else if n is an ∧ node then
8:     S[n] ← unionChild(Child(n), S)
9:     W[n] ← ∏_{c ∈ Child(n)} W[c]
10:  else
11:    if Var(n) in σ′ then
12:      if S[child of n selected by σ′[Var(n)]] is Invalid then
13:        S[n] ← Invalid, W[n] ← 0
14:      else
15:        S[n] ← followAssign(σ′)
16:        if σ′[Var(n)] is ¬Var(n) then
17:          W[n] ← θLo(n) × W[Lo(n)]
18:        else
19:          W[n] ← θHi(n) × W[Hi(n)]
20:    else
21:      l ← θLo(n) × W[Lo(n)]
22:      h ← θHi(n) × W[Hi(n)]
23:      W[n] ← l + h
24:      b ← Binomial(h / (l + h))
25:      if b is 1 then
26:        S[n] ← S[Hi(n)] ∪ {Var(n)}
27:      else
28:        S[n] ← S[Lo(n)] ∪ {¬Var(n)}
29: return S[rootnode(B)]</p>
        <p>
          The ability to conditionally sample trips is critical to
handling trip queries for arbitrary start-end vertices, for
which a trip is to be sampled conditioned on the given start
and end vertices. In this work, we adapted the weighted
sampling algorithm using PROB, which was introduced
by prior work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], to support conditional sampling and
denote it as ProbSample.
        </p>
      </sec>
      <sec id="sec-2-10">
        <p>Algorithm 2, ProbSample, performs sampling of satisfying assignments from a PROB B in a bottom-up manner.</p>
        <p>ProbSample takes as input a PROB B and a partial assignment σ′ and returns a sampled complete assignment that agrees with σ′. The input σ′ specifies the terminal vertices for a given trip query by assigning the t-type variables.</p>
        <p>ProbSample employs two caches, S and W, for the partially sampled assignment at each node and the joint probabilities during the sampling process. In the process, ProbSample performs calculations of joint probabilities at each node.</p>
        <p>In addition, ProbSample stores the partial samples at each node in S. The partial sample for the false node is Invalid, as it means that an assignment is unsatisfiable.</p>
        <p>On the other hand, the partial sample for the true node is ∅, which will be incremented with variable assignments during the processing of the internal nodes of B. The partially sampled assignment at every ∧-node n is the union of the samples of all its child nodes, as the child nodes have mutually disjoint variable sets due to the decomposability property. For a decision node n, if Var(n) is in σ′, the partial sample at n is the union of the literal in σ′ and the partial sample at the corresponding child node (lines 11 to 19), so as to condition on σ′. Otherwise, the partial assignment at n is sampled according to the weighted joint probabilities l and h (lines 21 to 28). Finally, the output of ProbSample is the sampled assignment at the root node of B. To extend ProbSample to sample k complete assignments, one keeps k partial assignments in S at each node during the sampling process and samples k independent partial assignments at each decision node.</p>
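<p>The bottom-up conditional sampling pass can be sketched as follows. The node layout, cache representation, and helper names are ours, and Invalid is represented by None; this is a sketch of the idea, not the paper's implementation:</p>

```python
import random

def prob_sample(nodes, theta, partial):
    """One bottom-up pass over a PROB, filling a joint-probability cache W
    and a partial-sample cache S per node, then returning the assignment
    at the root. `nodes` is topologically ordered (children before
    parents) with entries ('true',), ('false',), ('and', child_ids), or
    ('dec', var, hi_id, lo_id); theta maps decision node ids to
    (theta_hi, theta_lo); `partial` is the conditioning assignment.
    """
    S, W = {}, {}
    for i, node in enumerate(nodes):
        kind = node[0]
        if kind == "true":
            S[i], W[i] = {}, 1.0
        elif kind == "false":
            S[i], W[i] = None, 0.0            # None marks Invalid
        elif kind == "and":
            kids = node[1]
            if any(S[c] is None for c in kids):
                S[i], W[i] = None, 0.0
            else:                              # decomposability: disjoint vars
                S[i] = {k: v for c in kids for k, v in S[c].items()}
                W[i] = 1.0
                for c in kids:
                    W[i] *= W[c]
        else:
            _, var, hi, lo = node
            th, tl = theta[i]
            if var in partial:                 # condition on the query
                child = hi if partial[var] else lo
                if S[child] is None:
                    S[i], W[i] = None, 0.0
                else:
                    S[i] = dict(S[child], **{var: partial[var]})
                    W[i] = (th if partial[var] else tl) * W[child]
            else:                              # sample proportionally to l, h
                l, h = tl * W[lo], th * W[hi]
                W[i] = l + h
                take_hi = W[i] > 0 and h > random.random() * W[i]
                child = hi if take_hi else lo
                S[i] = None if S[child] is None else dict(S[child], **{var: take_hi})
    return S[len(nodes) - 1]                   # root is last in the ordering
```

On the toy PROB for F = (a ∨ b) ∧ (¬a ∨ ¬b), conditioning on a = true forces b = false, while unconditioned samples always satisfy a ≠ b.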
        <p>Proposition 2. Let PROB B represent Boolean formula F. ProbSample samples σ ∈ Sol(F) according to the joint branch parameters, that is, with probability proportional to ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)], where α_n is 1 if Hi(n) ∈ Rep_B(σ) and 0 otherwise.</p>
        <p>Proof. Let B be a PROB that only consists of decision nodes as internal nodes. At each decision node n in the bottom-up sampling pass, the assignment of Var(n) is sampled proportional to θLo(n) × W[Lo(n)] and θHi(n) × W[Hi(n)] to be false and true respectively. Without loss of generality, we focus on the term θLo(n) × W[Lo(n)]; a similar argument follows for the other branch by symmetry.</p>
        <p>Let n2 denote Lo(n). Notice that W[n2] is θLo(n2) × W[Lo(n2)] + θHi(n2) × W[Hi(n2)]. Expanding the equation, the probability of sampling ¬Var(n) is θLo(n) × θLo(n2) × W[Lo(n2)] + θLo(n) × θHi(n2) × W[Hi(n2)]. If we expand W[Lo(n)] recursively, we are considering all possible satisfying assignments of VarSet(Lo(n)); more specifically, we would be taking the sum of the products of the corresponding branch parameters of each possible satisfying assignment of VarSet(Lo(n)).</p>
        <p>Observe that Var(n) is sampled to be assigned false with probability θLo(n) × θLo(n2) × W[Lo(n2)] + θLo(n) × θHi(n2) × W[Hi(n2)]. Similarly, Var(n2) is sampled to be assigned false with probability θLo(n2) × W[Lo(n2)]. Notice that if we view the bottom-up process in reverse, the probability of sampling ¬Var(n) and ¬Var(n2) is θLo(n) × θLo(n2) × W[Lo(n2)]. In the general case, a satisfying assignment reaches the true node, which has its W value set to 1. It then follows that for each σ ∈ Sol(F), σ is sampled with probability proportional to p_σ = ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)]. Notice that ∧-nodes have no impact on the sampling probability, as no additional terms are introduced in the product of branch parameters.</p>
        <p>Remark 2. Recall from Remark 1 that θHi(n) and θLo(n) are approximately conditional probabilities. By Proposition 2, an assignment σ ∈ Sol(F) is sampled with probability proportional to ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)]. Notice that if we rewrite the product of branch parameters as the product of approximate conditional probabilities, it is approximately the joint probability of sampling σ.</p>
      </sec>
      <sec id="sec-2-11">
        <title>Refinement</title>
        <p>In the refinement step, we extract a trip from a sampled assignment σ by removing spurious disjoint loop components using depth-first search.</p>
        <p>Definition 2. Let ℳ′ : Sol(F) ↦→ SimpleTrip(G) be the mapping function of the refinement process, for a given graph G and its relaxed encoding F. For an assignment σ ∈ Sol(F), let C be the set of vertices of G that have their c-type variables set to true in σ. A depth-first search is performed from the starting vertex on C, removing disjoint components. The resultant simple path is τ = ℳ′(σ).</p>
        <p>Although ℳ′(·) is a many-to-one (i.e. surjective) mapping function, this is not a concern in practice, as trips with disjoint loop components are unlikely to occur in real-world or synthetic trip data from which probabilities can be learned.</p>
        <p>Theorem 1. Given s, t ∈ V, let τ_{s,t} ∈ SimpleTrip(G) be a trip between s and t. Let Σ_{s,t} = {σ | (σ ∈ Sol(F)) ∧ (ℳ′(σ) = τ_{s,t})}. Then Pr[τ_{s,t} is sampled] ∝ ∑_{σ ∈ Σ_{s,t}} ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)].</p>
      </sec>
      <sec id="sec-2-12">
        <title>Proof of Theorem 1</title>
        <p>From Definitions 1 and 2, given a graph G and its relaxed encoding F, ∀τ ∈ SimpleTrip(G), ∃σ ∈ Sol(F) such that ℳ′(σ) = τ.</p>
        <p>Notice that sampling τ_{s,t} amounts to sampling some σ ∈ Σ_{s,t}. As such, the probability of sampling τ_{s,t} is the sum over the probabilities of sampling each member of Σ_{s,t}. Recall that the probability of sampling a single assignment σ is proportional to ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)] by Proposition 2. As such, Pr[τ_{s,t} is sampled] is proportional to ∑_{σ ∈ Σ_{s,t}} ∏_{n ∈ Rep_B(σ)} [(1 − α_n)θLo(n) + α_n θHi(n)].</p>
        <p>Remark 3. It is worth noting that Pr[τ_{s,t} is sampled] &gt; 0, as all branch parameters are greater than 0 by definition. Recall that branch parameters are computed with a '+1' in the numerator and a '+2' in the denominator, and given that branch counters are 0 or larger, branch parameters are strictly positive.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Experiments</title>
      <sec id="sec-3-1">
        <title>In order to evaluate the efficacy of ProbRoute, we built a</title>
        <p>
          prototype in Python 3.8 with NumPy [28], toposort [29],
OSMnx[30], and NetworkX [31] packages. We employ
KCBox tool1 for OBDD[∧] compilation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. The
experiments were conducted on a cluster of machines with Intel
Xeon Platinum 8272CL processors and 64GB of memory.
In the experiments, we evaluated ProbRoute against an
adaptation of the state-of-the-art probabilistic routing
approach [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and an off-the-shelf non-probabilistic routing
library, Pyroutelib3 [32], in terms of quality of trip
suggestions and runtime performance. In particular, we adapted
the state-of-the-art approach by Choi et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] to sample
for trips instead of computing the most probable trip and
refer to the adapted approach as ‘CSD’ in the rest of the
section. In addition, we compared our relaxed encoding to
existing path encodings across various graphs, specifically
to absolute encoding and compact encoding [23].
        </p>
        <p>Through the experiments, we investigate the following:</p>
      </sec>
      <sec id="sec-3-2">
        <title>R1 Can ProbRoute effectively learn from data and sample quality trips?</title>
      </sec>
      <sec id="sec-3-3">
        <title>R2 How efficient is our relaxed encoding technique?</title>
      </sec>
      <sec id="sec-3-4">
        <title>R3 What is the runtime performance of ProbRoute?</title>
      </sec>
      <sec id="sec-3-5">
        <title>Data Generation</title>
        <p>In this study, we use the real-world road network of
Singapore obtained from OpenStreetMap [33] using OSMnx. The road
network graph G consisted of 23522 vertices and 45146 edges along
with their lengths. In addition, we also use an abstracted
graph² of G, which we denote as A for the remainder of this
section. A vertex in A corresponds to a unique subgraph of G.</p>
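        <p>A vertex of the abstracted graph stands for a region of the
road network. As a minimal illustrative sketch (a plain edge-list
representation is assumed and geohash details are omitted), the
region-level graph can be built by collapsing every road edge that
crosses between two regions:</p>

```python
def abstract_graph(edges, region_of):
    # Collapse a road-network edge list into region-level edges:
    # each region becomes one abstract vertex, and two regions are
    # adjacent whenever some road edge crosses between them.
    abstract_edges = set()
    for u, v in edges:
        ru, rv = region_of[u], region_of[v]
        if ru != rv:
            # Record one undirected edge between the two regions.
            abstract_edges.add((min(ru, rv), max(ru, rv)))
    return abstract_edges
```

        <p>In the paper's setting the region map would come from
geohash level-5 cells; any partition of the vertices works the same
way.</p>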
      </sec>
      <sec id="sec-3-6">
        <p>¹ https://github.com/meelgroup/KCBox</p>
        <p>² We use the geohash system (geohash level 5) for
partitioning the road network graph. For more information on the
format, see http://geohash.org/site/tips.html#format</p>
        <p>Synthetic trips were generated by deviating from the
shortest path given start and end vertices. A random pair of start
and end vertices was selected in G and the shortest path π was
found. Next, the corresponding intermediate regions of π in A were
blocked in G, and a new shortest path π′ was found and deemed to be
the generated synthetic trip. We generated 11000 synthetic trips,
10000 for training and 1000 for evaluation. While we used A to keep
the trip sampling time reasonable, it is possible to use more
fine-grained regions for offline applications.</p>
        <sec id="sec-3-6-1">
          <title>4.1. R1: ProbRoute's Ability to Learn Probabilities</title>
          <p>To understand ProbRoute's ability to learn probabilities
from data, we evaluate its ability to produce trips that closely
resemble the ground truth. Both ProbRoute and CSD, which are
sampling-based approaches, were evaluated by sampling 20 trips and
taking the median match rate for each instance. Recall that the
η-match rate is defined as the proportion of vertices in the ground
truth trip that were within η meters of Euclidean distance from the
closest vertex in the proposed trip. In the evaluation, we set the
tolerance η to be the median edge length of G, which is 64.359
meters, to account for parallel trips. To further emphasize the
advantages of probabilistic approaches, we included an off-the-shelf
routing library, Pyroutelib3 [32], in the comparison.</p>
          <p>In order to conduct a fair comparison, we adapted the CSD
approach to utilize the PROB derived from our relaxed encoding. Our
evaluation utilizes this adapted approach to sample a trip on A in a
stepwise manner, where the probability of the next step is
conditioned on the previous step and the destination. The
conditional probabilities are computed in a similar manner to the
computations of joint probabilities, which are the cached values in
ProbSample. The predicted trip on the road network G is determined
by the shortest trip on the subgraph formed by the sequence of
sampled regions. In contrast, ProbRoute samples a trip on A in a
single pass, and subsequently retrieves the shortest trip on the
subgraph of the sampled regions as the predicted trip on G. It is
worth noting that for sampling-based approaches, there may be
instances where a trip cannot be found on A due to factors such as a
region in G containing disconnected components. We incorporated a
rejection sampling process with a maximum of 400 attempts and 5
minutes to account for such scenarios.</p>
          <p>Table 1 shows the match rate statistics of the respective
methods. Under the η-match setting, where η is set to the median
edge length of G to account for parallel trips, ProbRoute has the
highest match rate among the three approaches. In addition,
ProbRoute produced perfect matches for more than 25% of instances.
ProbRoute has a median η-match rate of 0.316, significantly more
than 0.172 for CSD and 0.107 for Pyroutelib. The trend is similar
for exact matches, where η is set to 0, as shown under the 'Exact
Match' columns in Table 1. In the exact match setting, ProbRoute
achieved a median match rate of 0.310, almost double CSD's median
match rate of 0.160. The evaluation results also demonstrate the
usefulness of probabilistic approaches such as ProbRoute, especially
in scenarios where experienced drivers navigate according to their
own heuristics, which may be independent of the shortest trip. In
particular, ProbRoute would be able to learn and suggest trips that
align with the unknown heuristics of driver preferences given start
and destination locations. Thus, the results provide an affirmative
answer to R1.</p>
        </sec>
        <sec id="sec-3-6-2">
          <title>4.2. R2: Efficiency of Relaxed Encoding</title>
          <p>We compared our relaxed encoding to existing path
encodings across various graphs, specifically to absolute encoding
and compact encoding [23]. In the experiment, we had to adapt
compact encoding to CNF form with the Tseitin transformation [34],
as CNF is the standard input for compilation tools. We compiled the
CNFs of the encodings into OBDD[∧] form with a 3600s compilation
timeout and compared the sizes of the resultant diagrams. The
results are shown in Table 2, with rows corresponding to the
different encodings used and columns corresponding to the different
graphs being encoded. Entries with TO indicate that the compilation
timed out.</p>
          <p>Table 2 shows that our relaxed encoding consistently
results in smaller decision diagrams, up to 91× smaller. It is also
worth noting that relaxed encoding is the only</p>
        </sec>
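        <p>The η-match rate used in the evaluation above can be
sketched as follows; planar Euclidean coordinates and all names are
assumptions for illustration, not the paper's implementation:</p>

```python
import math


def match_rate(truth, proposed, eta):
    # Fraction of ground-truth vertices lying within eta meters
    # (Euclidean) of the closest vertex of the proposed trip.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    unmatched = sum(
        1 for t in truth
        if min(dist(t, p) for p in proposed) > eta
    )
    return 1.0 - unmatched / len(truth)
```

        <p>Setting eta to 0 recovers the exact-match rate, and setting
it to the median edge length gives the parallel-trip tolerance used
in Table 1.</p>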
      </sec>
      <sec id="sec-3-7">
        <p>
          From the result, ProbRoute is more than one order of
magnitude faster on median than the existing probabilistic
approach, CSD. The results also show that, on median, ProbRoute
is more than an order of magnitude closer to Pyroutelib's runtime
when using the same PROB, as compared to the CSD approach. In
addition, the CSD approach timed out on 650 of the 1000 test
instances, while ProbRoute did not time out. Additionally, as
mentioned in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], CSD does not guarantee being able to produce a complete
trip from start to destination. The results in Table 3 highlight
the progress made by ProbRoute in closing the gap between
probabilistic routing approaches and existing systems.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>Whilst we have demonstrated the efficiency of our
approach, there are possible extensions to make our
approach more appealing for wide adoption. In terms of
runtime performance, our approach is three orders of
magnitude slower than existing probability-agnostic routing
systems. As such, there is still room for runtime
improvement before our approach can serve as a functional replacement for
existing routing systems. Additionally, our relaxed
encoding only handles undirected graphs at the moment and it
would be of practical interest to extend the encoding to
directed graphs to handle one-way streets. Furthermore, it
would also be of interest to incorporate ideas to improve
runtime performance from existing hierarchical path-finding
algorithms such as contraction hierarchies, multi-level
Dijkstra, and other related works [35, 36, 37].</p>
      <p>In summary, we focused on addressing the scalability
barrier for reasoning over route distributions. To this end,
we contribute two techniques: a relaxation and refinement
approach that allows us to efficiently and compactly
compile routes corresponding to real-world road networks, and
a one-pass route sampling technique. We demonstrated
the effectiveness of our approach on a real-world road
network and observed around 91× smaller PROB, 10×
faster trip sampling runtime, and almost 2× the match
rate of the state-of-the-art probabilistic approach, bringing
probabilistic approaches closer to achieving comparable
runtime to traditional routing tools.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <sec id="sec-5-1">
        <p>We sincerely thank Yong Lai for the insightful
discussions and the reviewers for their constructive feedback. This
work is supported in part by grants – S18-1198-IPP-II,
NRF-NRFFAI1-2019-0004, MOE-T2EP20121-0011, and
R-252-000-B59-114. Suwei Yang is supported by the
Grab-NUS AI Lab, a joint collaboration between
GrabTaxi Holdings Pte. Ltd., National University of
Singapore, and the Industrial Postgraduate Program (Grant:
S18-1198-IPP-II) funded by the Economic Development
Board of Singapore. Kuldeep S. Meel is supported in
part by National Research Foundation Singapore under
its NRF Fellowship Programme
(NRF-NRFFAI1-2019-0004), Ministry of Education Singapore Tier 2 grant
(MOE-T2EP20121-0011), and Ministry of Education
Singapore Tier 1 Grant (R-252-000-B59-114).</p>
        <p>tomizable route planning, in: Proceedings of the
10th International Symposium on Experimental Algorithms, 2011.
[36] R. Geisberger, P. Sanders, D. Schultes, C. Vetter, Exact
routing in large road networks using contraction hierarchies,
Transp. Sci. (2012). [37] K. C. K. Lee, W.-C. Lee, B. Zheng,
Y. Tian, Road: A new spatial object search framework for road
networks, IEEE Transactions on Knowledge and Data Engineering
(2012).</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Clewlow</surname>
          </string-name>
          , G. Mishra,
          <article-title>Disruptive transportation: The adoption, utilization, and impacts of ride-hailing in the United States</article-title>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Collison</surname>
          </string-name>
          ,
          <article-title>The impact of online food delivery services on restaurant sales</article-title>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S. Banerjee, C. Riquelme, R. Johari, Pricing in ride-share platforms: A queueing-theoretic approach, Econometrics: Econometric &amp; Statistical Methods - Special Topics eJournal (2015).</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Z. Wang, K. Fu, J. Ye, Learning to estimate the travel time, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining (2018). [21] R. Peharz, S. Lang, A. Vergari, K. Stelzner, A. Molina, M. Trapp, G. V. den Broeck, K. Kersting, Z. Ghahramani, Einsum networks: Fast and scalable learning of tractable probabilistic circuits, in: ICML, 2020.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] J. Letchner, J. Krumm, E. Horvitz, Trip router with individualized preferences (TRIP): Incorporating personalization into route planning, in: AAAI, 2006.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] R. K. Ahuja, K. Mehlhorn, J. Orlin, R. E. Tarjan, Faster algorithms for the shortest path problem, Journal of the ACM (JACM) 37 (1990) 213-223. [22] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An imperative style, high-performance deep learning library, in: NeurIPS, 2019.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] D. J. Rosenkrantz, R. E. Stearns, P. M. Lewis, Approximate algorithms for the traveling salesperson problem, in: 15th Annual Symposium on Switching and Automata Theory (SWAT 1974), IEEE, 1974, pp. 33-42. [23] S. Prestwich, SAT problems with chains of dependent variables, Discret. Appl. Math. 130 (2003) 329-350. [24] D. E. Knuth, The art of computer programming, volume 4, fascicle 2: Generating all tuples and permutations, 2005.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] A. Choi, Y. Shen, A. Darwiche, Tractability in structured probability spaces, in: NeurIPS, volume 30, 2017, pp. 3477-3485. [25] T. Inoue, H. Iwashita, J. Kawahara, S. ichi Minato, Graphillion: software library for very large sets of labeled graphs, International Journal on Software Tools for Technology Transfer (2016).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Y. Shen, A. Choi, A. Darwiche, Conditional PSDDs: Modeling and learning with modular knowledge, in: AAAI, 2018. [26] J. Kawahara, T. Inoue, H. Iwashita, S. ichi Minato, Frontier-based search for enumerating all constrained subgraphs with compressed representation, IEICE Trans. Fundam. Electron. Commun. Comput. Sci. (2017).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Y. Shen, A. Goyanka, A. Darwiche, A. Choi, Structured Bayesian networks: From inference to learning with routes, in: AAAI, 2019.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] A. Choi, G. Van den Broeck, A. Darwiche, Probability distributions over structured spaces, in: AAAI, 2015. [27] C. D. Manning, P. Raghavan, H. Schütze, Introduction to information retrieval, 2008.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] S. Yang, V. Liang, K. S. Meel, INC: A scalable incremental weighted sampler, in: FMCAD, 2022, p. 205.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Darwiche, P. Marquis, A knowledge compilation map, J. Artif. Intell. Res. 17 (2002) 229-264. [28] C. Harris, K. J. Millman, S. Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. Smith, R. Kern, M. Picus, S. Hoyer, M. Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, T. E. Oliphant, Array programming with NumPy, Nature 585 (2020) 357-362.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Y. Lai, D. Liu, M. Yin, New canonical representations by augmenting OBDDs with conjunctive decomposition, Journal of Artificial Intelligence Research 58 (2017) 453-521. [29] E. V. Smith, toposort, 2022. URL: https://pypi.org/project/toposort/.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] R. E. Bryant, Graph-based algorithms for boolean function manipulation, IEEE Trans. Comput. 35 (1986) 677-691. [30] G. Boeing, OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks, Econometrics: Computer Programs &amp; Software eJournal (2017).</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] A. Darwiche, SDD: A new canonical representation of propositional knowledge bases, in: IJCAI, 2011.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] R. Mateescu, R. Dechter, R. Marinescu, AND/OR multi-valued decision diagrams (AOMDDs) for graphical models, J. Artif. Intell. Res. (2008). [31] A. A. Hagberg, D. A. Schult, P. Swart, Exploring network structure, dynamics, and function using NetworkX, in: Proceedings of the 7th Python in Science Conference, 2008, pp. 11-15.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] G. Boole, An investigation of the laws of thought: On which are founded the mathematical theories of [32]
          <string-name>
            <given-names>O.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuranowski</surname>
          </string-name>
          , Pyroutelib3,
          <year>2017</year>
          . URL: logic and probabilities,
          <year>1854</year>
          . https://github.com/MKuranowski/pyroutelib3.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vergari</surname>
          </string-name>
          , G. V. den Broeck, Probabilistic [33]
          <article-title>OpenStreetMap contributors, Planet dump recircuits: A unifying framework for tractable proba- trieved from https://planet</article-title>
          .osm.org , https://www. bilistic models,
          <year>2020</year>
          . openstreetmap.org,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>H.</given-names>
            <surname>Poon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Domingos</surname>
          </string-name>
          , Sum-product networks: [34]
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Tseitin</surname>
          </string-name>
          ,
          <article-title>On the complexity of derivation in A new deep architecture</article-title>
          ,
          <source>2011 IEEE International propositional calculus</source>
          ,
          <year>1983</year>
          . Conference on Computer Vision Workshops (ICCV [35]
          <string-name>
            <given-names>D.</given-names>
            <surname>Delling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goldberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pajor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Werneck</surname>
          </string-name>
          , Cus-
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>