<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Random-Walk Closeness Centrality Satisfies Boldi-Vigna Axioms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ricardo Mora</string-name>
          <email>rmora@dcc.uchile.cl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Gutierrez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Semantic Web Research, Dept. Computer Science, Universidad de Chile</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>Recently, Boldi and Vigna proposed axioms that would characterize good notions of centrality. We study a random-walk version of closeness centrality and prove that it satisfies the Boldi-Vigna axioms for undirected graphs.</p>
      </abstract>
      <kwd-group>
        <kwd>Random Walks</kwd>
        <kwd>RDF</kwd>
        <kwd>Centrality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>Consider the Euclidean plane and a set of n points: which one is the most
central? An intuitive selection would be the point p that minimizes the sum of
the distances from the other points to p. Consider now a set of n cities: in which
one (abstracting social constraints) would you install a delivery store? Clearly,
in one that minimizes the sum of distances from each city to it (here distance
is not Euclidean, but measured along highways). A similar problem can be found inside a city
(where distance is close to the Manhattan distance). In this paper we address this
problem in the general case of undirected graphs, motivated by its application
to semantic networks (particularly RDF graphs).</p>
<p>This is not only a nice theoretical problem. One of the big challenges that
the web offers today has to do with the huge quantity of data that it contains. In
particular, in the case of large knowledge networks in the form of RDF graphs,
it is highly relevant to understand which are the “essential” concepts they
represent.</p>
      <p>
        What is the “good” distance in this case? There is some evidence [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] that
using a distance based on random walks might be a fruitful idea. It turns out, as
we will show, that the idea of selecting a node v that minimizes the sum of the
random-walk distances from each other node u to v works really well in RDF
graphs [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In this paper we study this notion and test it against the Boldi-Vigna
axioms.
      </p>
      <p>
        The problem of detecting central nodes in a graph has been extensively
investigated [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and centrality indicators like degree, together with others based on shortest
distances between elements, such as betweenness centrality and closeness, have
been successfully employed on a variety of networks. In an attempt to unify these
manifold centrality measures, Boldi and Vigna [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] recently proposed a set of
axioms that would capture the essential properties that underlie all of them. They
show that classic notions such as closeness, degree, and betweenness centrality do
not satisfy these demanding axioms. In this paper we apply the Boldi-Vigna test to
random-walk closeness centrality. We prove that this centrality notion satisfies
these axioms for undirected graphs.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
<p>Basic Graph Theoretical Notions</p>
      <p>An undirected and simple graph (from now on we will work only with this kind
of graph) is a pair G = (V, E) where E ⊆ [V]², and [V]² is the set of all
2-element subsets of V. The elements of V are the vertices of G, and the ones
from E are its edges. When necessary, we will use the notation V(G) and E(G)
for those sets. From now on, an element {u, v} ∈ E will be denoted simply by
uv. An important family of graphs are the cliques: for n ≥ 1, an n-clique is a graph
Kn := (V, E) with |V| = n, such that E = [V]².</p>
<p>A vertex u is said to be a neighbor of another vertex v when uv ∈ E. Note
that the definition of E implies that v is also a neighbor of u. The set of neighbors
of v will be denoted by NG(v). The degree of v, dG(v), is the size of NG(v). Should
the reference be clear, they will simply be denoted by N(v) and d(v).</p>
<p>Let G = (V, E) and G′ = (V′, E′) be two graphs such that V′ ⊆ V and
E′ ⊆ E; then G′ is said to be a subgraph of G (it is also said that G contains
G′). For a subset S ⊆ V, G[S] := (S, {uv ∈ E : u, v ∈ S}) and G − S := G[V \ S].
Analogously, for F ⊆ E, G − F := (V, E \ F).</p>
<p>A graph Pn = ({v0, v1, ..., vn}, {v0v1, v1v2, ..., vn−1vn}) with n ≥ 0, where all
vi are distinct, is called a path, and the number of edges in it is its length. A cycle
is a special type of path such that v0 = vn. We will call a cycle of length n an
n-cycle.</p>
<p>Let G = (V, E) be a graph and u, v ∈ V two distinct vertices. A path Pn in
G, with n ≥ 1, such that v0 = u and vn = v, is called a u–v path. Also, G is said
to be connected if for all distinct u, v ∈ V a u–v path exists in G. A connected
component of G is a maximal connected subgraph H. Note that a connected
graph has only one connected component. An edge uv of G is said to be a bridge
if the graph G − uv contains at least one more connected component than G.</p>
      <p>Random Walks</p>
      <p>
        The next definitions come from the work of Lovász on random walk theory [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
<p>Let G = (V, E) be a connected graph such that |V| = n and |E| = m, where
n, m ∈ ℕ. Formally, a random walk is a sequence of vertices obtained as follows:
it starts at a vertex v0, and if at the t-th step it is at a vertex vt = u, it moves
to a neighbor v of u with probability puv = 1/d(u). Note that the sequence of
random vertices (vt : t = 0, 1, ...) is a Markov chain.</p>
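      <p>As an illustration (ours, not from the paper), the walk just defined can be simulated directly from an adjacency-list representation; the toy graph below is an assumption made only for the example.</p>

```python
import random

# Toy connected graph as an adjacency list (an illustrative assumption,
# not a graph from the paper).
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(G, v0, steps, rng):
    """From the current vertex u, move to a uniformly random neighbor,
    i.e. with probability p_uv = 1/d(u), as in the definition above."""
    walk = [v0]
    for _ in range(steps):
        walk.append(rng.choice(G[walk[-1]]))
    return walk

walk = random_walk(G, 0, 10, random.Random(1))
# Every consecutive pair of the sequence is an edge of G.
valid = all(walk[i + 1] in G[walk[i]] for i in range(len(walk) - 1))
```

By construction the sequence is a Markov chain: the next vertex depends only on the current one.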
<p>Pt will denote the distribution of vt: Pt(v) = P(vt = v). The vertex v0 may
be fixed, but may also be drawn from an initial distribution P0. This initial
distribution is said to be stationary if P1 = P0 (which implies that Pt =
P0 for all t ≥ 0, because of the construction of the random walk). It can easily be
proved that the distribution π(v) := d(v)/2m is stationary for every graph G.
From now on π will be referred to simply as the stationary distribution (it is not
difficult to prove that this distribution is unique, which makes this reference
valid).</p>
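      <p>The stationarity of π(v) = d(v)/2m can be checked numerically. The sketch below (our illustration, on an assumed toy graph) applies one step of the walk to π and verifies that the distribution does not move.</p>

```python
# Assumed toy graph; m is its number of edges.
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
m = sum(len(nbrs) for nbrs in G.values()) // 2

# The claimed stationary distribution pi(v) = d(v)/2m.
pi = {v: len(G[v]) / (2 * m) for v in G}

def step(G, P):
    """One step of the walk on distributions: P'(v) is the sum over
    neighbors u of v of P(u)/d(u)."""
    return {v: sum(P[u] / len(G[u]) for u in G[v]) for v in G}

drift = max(abs(step(G, pi)[v] - pi[v]) for v in G)
# drift is (numerically) zero: pi is stationary.
```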
<p>Definition 1. The hitting time H(u, v) is the expected number of steps that a
random walk starting at vertex u takes to reach vertex v for the first time.</p>
      <p>Definition 2. Let S be a subset of V. The hitting time for a set, H(u, S), is
the expected number of steps that a random walk starting at vertex u takes to
reach some vertex in S for the first time.</p>
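      <p>One standard way (not spelled out in the paper) to compute H(u, v) exactly is to solve the linear system H(v, v) = 0 and, for u ≠ v, H(u, v) = 1 + (1/d(u)) Σ_{w ∈ N(u)} H(w, v). The sketch below does so by fixed-point iteration on an assumed toy graph.</p>

```python
# Assumed toy graph: triangle 0-1-2 plus the pendant edge 2-3.
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def hitting_times_to(G, v, iters=20000):
    """Fixed-point iteration for H(., v): H(v, v) = 0 and, for u != v,
    H(u, v) = 1 + average of H(w, v) over the neighbors w of u."""
    H = {u: 0.0 for u in G}
    for _ in range(iters):
        H = {u: 0.0 if u == v else
             1.0 + sum(H[w] for w in G[u]) / len(G[u]) for u in G}
    return H

H3 = hitting_times_to(G, 3)
# Solving the 3x3 linear system by hand gives H(2,3) = 7, H(0,3) = H(1,3) = 9.
```

The iteration converges monotonically from below to the (unique, finite) solution on any connected graph.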
<p>When talking about H(S, u), a distribution (based on S) for the starting vertex
of the random walk has to be specified. Therefore, if P is that distribution,
HP(S, u) will be the expected number of steps that a random walk starting at a
vertex of S (selected according to P) takes to reach vertex u for the first time.
Note that for every pair of vertices u, v</p>
      <p>H(u, v) = H(u, {v}) = HP({u}, v),
because the only starting distribution P on {u} is the trivial one.</p>
    </sec>
    <sec id="sec-3">
      <title>Random Walk Closeness Centrality</title>
      <p>
        We are now in a position to formalize our notion of random-walk centrality. The
definition is motivated by several insights coming from different sources, but
mainly from actual semantic graphs (RDF graphs). From a formal point of view
(and this is the motivation of this paper) it satisfies, as will be proven later,
the three axioms of centrality proposed by Boldi and Vigna [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It is important
to note that most centrality measures do not satisfy them all, which makes this
notion of centrality interesting.
      </p>
      <p>
        Definition 3 (cf. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). Given a connected graph G = (V, E) and a vertex v ∈ V,
the Random Walk Closeness Centrality of v is the real number

h.(v) = Σ_{w ∈ V, w ≠ v} H(w, v).

The smaller h.(v) is, the more central v is. A similar notion of centrality based
on random walks was proposed by Noh and Rieger [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In fact, the definition
proposed here is a particular case of a more general notion that includes both
Noh and Rieger's and ours, but that will not be studied in this paper.
      </p>
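      <p>Definition 3 can be turned into a direct computation. The sketch below (ours, not the paper's code) reuses a fixed-point solver for hitting times on an assumed 4-vertex path and ranks its vertices by h.(v); the two middle vertices come out most central.</p>

```python
# Assumed toy graph: the path 0-1-2-3.
G = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def hitting_times_to(G, v, iters=20000):
    H = {u: 0.0 for u in G}
    for _ in range(iters):
        H = {u: 0.0 if u == v else
             1.0 + sum(H[w] for w in G[u]) / len(G[u]) for u in G}
    return H

def rw_closeness(G, v):
    """h(v) = sum over w != v of H(w, v); smaller means more central."""
    H = hitting_times_to(G, v)
    return sum(H[w] for w in G if w != v)

h = {v: rw_closeness(G, v) for v in G}
# By symmetry h(1) = h(2) and h(0) = h(3); the middle vertices win
# (h(1) = 8 versus h(0) = 22 on this path).
most_central = min(h, key=h.get)
```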
    </sec>
    <sec id="sec-5">
      <title>General Properties</title>
<p>To prove that random-walk closeness centrality satisfies the Boldi-Vigna axioms,
we first need some properties.</p>
<p>Proposition 4. Let u, v be distinct vertices of a connected graph G, and let S ⊆
V \ {u, v} be such that every u–v path contains some vertex from S. Then</p>
      <p>H(u, v) = H(u, S) + HPu,S(S, v),
where Pu,S is the distribution for random walks that start in S such that, for all
w in that set, Pu,S(w) is the probability that w is the first vertex from S that a
random walk starting at u reaches.</p>
<p>Proof. Let Ωu be the sample space containing all possible outcomes associated
to random walks that start at u and occur in G. Similarly, let ΩS be the one
associated to random walks that start at any vertex w of S for which Pu,S(w) &gt; 0.
Consider the random variables TuS : Ωu → ℝ, TSv : ΩS → ℝ and Tuv : Ωu → ℝ
defined as follows:</p>
      <p>TuS(ω) := number of steps that ω takes in order to reach some vertex in S for the
first time.</p>
      <p>TSv(ω) := number of steps that ω takes in order to reach vertex v for the first time.</p>
      <p>Tuv(ω) := number of steps that ω takes in order to reach vertex v for the first time.</p>
      <p>Define X(ω) := Tuv(ω) − TuS(ω). Namely, X is also a random variable that
satisfies X : Ωu → ℝ and

X(ω) = number of steps that ω takes (after reaching S for the first time) in order
to reach vertex v for the first time.</p>
      <p>For ω ∈ Ωu write ω = (u, v1, v2, ...) and define iS(ω) := min{i &gt; 0 : vi ∈ S} and
iv(ω) := min{i &gt; 0 : vi = v}. Also, define ωS := (v_iS, v_iS+1, ...) and j(ωS) :=
min{i &gt; 0 : v_{i+iS} = v}. Note that j(ωS) = iv(ω) − iS(ω) and that ωS is an
element of ΩS, because random walks are Markov processes. Then, for n ∈ ℕ,

P(X = n) = Σ_{w ∈ S} P(v_iS = w) P(iv − iS = n | v_iS = w)
= Σ_{w ∈ S} Pu,S(w) P(j(ωS) = n) = P(TSv = n).</p>
      <p>Therefore, X and TSv are random variables with the same expected value.
Finally, by using this,

H(u, v) = E(Tuv) = E(TuS + X) = E(TuS) + E(X)
= E(TuS) + E(TSv) = H(u, S) + HPu,S(S, v). ∎</p>
      <p>Corollary 5. Let u, w, v be three distinct vertices of a connected graph G such
that every u–v path contains w. Then</p>
      <p>
        H(u, v) = H(u, w) + H(w, v).

Proof. It follows directly from Proposition 4 by considering S = {w}. ∎

Before stating the next theorem, we need the following result from Lovász [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
and one more definition.
      </p>
      <p>
        Lemma 6 (Lovász [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]). The probability that a random walk starting at u visits
v before returning to u is

1 / ((H(u, v) + H(v, u)) π(u)).
      </p>
      <p>Definition 7. For a bridge uv of a connected graph G = (V, E), define Gu as

Gu := G[{w ∈ V : every w–v path in G contains u}];

that is, Gu and Gv are the connected components of G − uv (see Fig. 1 below).</p>
      <p>[Fig. 1: a bridge uv, with its endpoint u in the component Gu and its endpoint v in Gv.]</p>
      <p>Theorem 8. Let uv be a bridge of a connected graph G. Then</p>
<p>H(u, v) = 2|E(Gu)| + 1.</p>
      <p>Proof. First note that any random walk starting at u has to go through v before
stepping into another vertex of Gv; therefore H(u, v) does not depend on Gv.
Because of this, for simplicity, we can assume that Gv = ({v}, ∅). Then H(v, u) = 1
and |E(G)| = |E(Gu)| + 1. Now, the probability that a random walk starting at
u reaches v before returning to u is 1/d(u). It then follows from Lemma 6 that

1/d(u) = 1 / ((H(u, v) + H(v, u)) π(u)) = 1 / ((H(u, v) + 1) d(u) / (2|E(G)|))
= (2|E(Gu)| + 2) / ((H(u, v) + 1) d(u)),

which is equivalent to H(u, v) = 2|E(Gu)| + 1, which is what we wanted to prove. ∎</p>
      <p>Finally, using these properties we can prove the following result, which will allow
us to compare the centrality of different vertices under certain conditions.</p>
      <p>Proposition 9. Let uv be a bridge of a connected graph G. Then

h.(u) &lt; h.(v) ⟺ (2|E(Gu)| + 1)|V(Gu)| &gt; (2|E(Gv)| + 1)|V(Gv)|.</p>
      <p>Proof. The proof is straightforward and is included in the appendix.</p>
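      <p>Both results can be checked numerically. The sketch below (ours; the graph is an assumed example) takes the bridge 2-3 between a triangle Gu and a single edge Gv: Theorem 8 predicts H(u, v) = 2·3 + 1 = 7, and Proposition 9 predicts h.(u) &lt; h.(v), since (2·3 + 1)·3 = 21 exceeds (2·1 + 1)·2 = 6.</p>

```python
# Assumed example: triangle {0,1,2} (Gu), bridge 2-3, pendant edge 3-4 (Gv).
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
u, v = 2, 3

def hitting_times_to(G, v, iters=20000):
    H = {w: 0.0 for w in G}
    for _ in range(iters):
        H = {w: 0.0 if w == v else
             1.0 + sum(H[x] for x in G[w]) / len(G[w]) for w in G}
    return H

def rw_closeness(G, v):
    H = hitting_times_to(G, v)
    return sum(H[w] for w in G if w != v)

H_uv = hitting_times_to(G, v)[u]        # Theorem 8: 2*|E(Gu)| + 1 = 7
u_more_central = rw_closeness(G, v) > rw_closeness(G, u)   # Proposition 9
```

Note that H(2, 3) is the same 7 as in the pendant-free triangle example: as the proof of Theorem 8 observes, it does not depend on Gv.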
    </sec>
    <sec id="sec-6">
      <title>Boldi and Vigna Axioms</title>
      <p>We can now prove what was previously promised.</p>
      <p>
        Boldi and Vigna [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] seek to define axioms that provide formal and
provable information about a centrality measure, so that it can be assured
that it correctly captures the intuitive notion of centrality. To this end, they
propose to study the behavior of the measure under changes of size, of local
edge density, and under the addition of edges to the graph. It is important to note that
the axioms were designed primarily for measures that work with directed, and
not necessarily connected, graphs. For graphs representing semantic networks,
direction is not relevant, because the predicate represented by a directed edge
represents at the same time the inverse predicate. Therefore, we work with a
version of the original axioms adapted to connected and undirected graphs.
      </p>
<p>First is the Size Axiom. The idea is to compare the centrality of vertices from
a clique and a cycle joined through a path of large enough size. When fixing the
size of one of them, and letting the other grow as much as wanted, one would
expect the vertices of the latter to become more central. This is formalized as
follows.</p>
      <p>Axiom 1 (Size axiom). Consider the graph Sk,p made by a k-clique and a
p-cycle connected by a path of length l (see Fig. 2). A centrality measure
satisfies the size axiom if for every k there are two constants Pk, Lk such that for all
p ≥ Pk, l ≥ Lk, the centrality of any vertex of the p-cycle is strictly better than
the centrality of any vertex in the k-clique, and the same holds when inverting
the situation (that is, fixing p and letting k be as big as desired).</p>
      <p>[Fig. 2: the graph Sk,p; the clique vertex attached to the path is labeled 0, the path vertices 1, ..., l, and the cycle vertices c1, ..., cp.]</p>
      <p>Proof. We will prove the first case, as for the inverted situation the proof
is analogous. For simplicity of notation we will use the labels proposed in Fig.
2 when referring to the nodes of Sk,p. Also, we will denote by K the subgraph of
Sk,p that corresponds to the clique.</p>
<p>First note that it is not difficult to prove that vertex 0 is the most central
vertex of the clique, and that cp is the one from the cycle with the worst centrality.
Therefore, if we prove that cp is more central than 0, we will have proved the
result. Indeed, we have that</p>
<p>h.(0) − h.(cp) equals

2 Σ_{j=1}^{p−1} [H(cj, l) − H(cj, cp)] + Σ_{j=1}^{l−1} [H(j, 0) − H(j, l)]
+ H(cp, l) + 2pH(l, 0) − kH(0, l) − (k + l)H(l, cp).   (1)</p>
      <p>Using these two facts and (1), we have that h.(0) − h.(cp) is strictly greater than

2pH(l, 0) − kH(0, l) − (k + l)H(l, cp).   (2)</p>
      <p>Finally, note that the first term of (2) has order Ω(p²) (every edge of the
connecting path is a bridge, so Theorem 8 gives H(l, 0) ≥ 2p), while the second and
third terms have order O(p) and O(1), respectively. Therefore, because of (2), we have
that for p large enough

h.(0) − h.(cp) &gt; 0,

that is, cp is more central than vertex 0. ∎</p>
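      <p>The comparison made in this proof can be checked numerically for one concrete instance (our choice of parameters, not from the paper): k = 4, l = 4, p = 20. The sketch below builds Sk,p and verifies that cp, here the cycle vertex farthest from the junction, is more central than the clique vertex 0.</p>

```python
import itertools

def add_edge(G, a, b):
    G.setdefault(a, []).append(b)
    G.setdefault(b, []).append(a)

# Assumed instance of S_{k,p} with k = 4, l = 4, p = 20.
G = {}
clique = ['K0', 'K1', 'K2', 'K3']            # 'K0' plays the role of vertex 0
for a, b in itertools.combinations(clique, 2):
    add_edge(G, a, b)
path = ['K0', 'P1', 'P2', 'P3', 'C0']        # the connecting path, length l = 4
for a, b in zip(path, path[1:]):
    add_edge(G, a, b)
cycle = ['C%d' % i for i in range(20)]       # 'C10' is farthest from 'C0'
for i in range(20):
    add_edge(G, cycle[i], cycle[(i + 1) % 20])

def hitting_times_to(G, v, iters=20000):
    H = {u: 0.0 for u in G}
    for _ in range(iters):
        H = {u: 0.0 if u == v else
             1.0 + sum(H[w] for w in G[u]) / len(G[u]) for u in G}
    return H

def rw_closeness(G, v):
    H = hitting_times_to(G, v)
    return sum(H[w] for w in G if w != v)

far_cycle_beats_clique = rw_closeness(G, 'K0') > rw_closeness(G, 'C10')
```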
<p>Second is the Density Axiom. For this case, two cycles of the same size are
connected through a bridge. By symmetry, both endpoints of the bridge have
the same centrality. What should happen if we increase the number of edges of
one of the cycles until it becomes a clique? The centrality of the endpoint
connected to the cycle that is becoming a clique should also increase. Formally
stated:</p>
      <p>Axiom 2 (Density axiom). Consider the graph Dk,p made by a k-clique and
a p-cycle (p, k &gt; 3) connected by a bridge uv, where u is a vertex of the clique
and v one from the cycle. A centrality measure satisfies the density axiom if for
k = p, u is strictly more central than v.</p>
<p>Proof. First, remember Definition 7 made for bridges, and note that in this case
Gu corresponds to the k-clique and Gv to the p-cycle. Also, note that a k-clique
has exactly k(k − 1)/2 edges, while a p-cycle has p. Therefore, by using this fact
and Proposition 9, we have that

k &gt; 3 ∧ k = p ⟹ k³ − 3k² &gt; 0
⟺ k³ − k² + k &gt; 2k² + k
⟺ k(k(k − 1) + 1) &gt; k(2k + 1)
⟺ k(2 · (k(k − 1)/2) + 1) &gt; p(2p + 1)
⟺ |V(Gu)|(2|E(Gu)| + 1) &gt; |V(Gv)|(2|E(Gv)| + 1)
⟺ h.(u) &lt; h.(v),

that is, u is strictly more central than v. ∎</p>
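      <p>As a sanity check (ours, not part of the paper), the sketch below builds the assumed instance D_{5,5}: a 5-clique and a 5-cycle joined by a bridge, and confirms numerically that the clique endpoint u is strictly more central than v, as Proposition 9 predicts (5·21 versus 5·11).</p>

```python
import itertools

def add_edge(G, a, b):
    G.setdefault(a, []).append(b)
    G.setdefault(b, []).append(a)

# Assumed instance D_{k,p} with k = p = 5.
G = {}
for a, b in itertools.combinations(['k0', 'k1', 'k2', 'k3', 'k4'], 2):
    add_edge(G, a, b)                                # the 5-clique
for i in range(5):
    add_edge(G, 'c%d' % i, 'c%d' % ((i + 1) % 5))    # the 5-cycle
add_edge(G, 'k0', 'c0')                              # bridge: u = 'k0', v = 'c0'

def hitting_times_to(G, v, iters=20000):
    H = {u: 0.0 for u in G}
    for _ in range(iters):
        H = {u: 0.0 if u == v else
             1.0 + sum(H[w] for w in G[u]) / len(G[u]) for u in G}
    return H

def rw_closeness(G, v):
    H = hitting_times_to(G, v)
    return sum(H[w] for w in G if w != v)

u_more_central = rw_closeness(G, 'c0') > rw_closeness(G, 'k0')
```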
<p>Finally, there is the Monotonicity Axiom. It states that when adding an edge
to a graph that did not originally have it, the centrality of both endpoints should
improve.</p>
<p>Axiom 3 (Monotonicity axiom). Consider an arbitrary graph G = (V, E)
(with |V| ≥ 2) and a pair of vertices u, v of G such that uv ∉ E. A centrality
measure satisfies the monotonicity axiom if, when we add uv to G, the centrality
of both vertices improves.</p>
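      <p>Before the proof, the axiom can be checked on a small assumed example: adding the edge 0-3 to the path 0-1-2-3 (turning it into a 4-cycle) should improve the centrality of both endpoints 0 and 3.</p>

```python
def hitting_times_to(G, v, iters=20000):
    H = {u: 0.0 for u in G}
    for _ in range(iters):
        H = {u: 0.0 if u == v else
             1.0 + sum(H[w] for w in G[u]) / len(G[u]) for u in G}
    return H

def rw_closeness(G, v):
    H = hitting_times_to(G, v)
    return sum(H[w] for w in G if w != v)

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # path plus the edge 0-3

# h(0) drops from 22 on the path to 10 on the 4-cycle (and likewise for 3):
# a smaller value means the endpoint became more central.
improved_0 = rw_closeness(path, 0) > rw_closeness(cycle, 0)
improved_3 = rw_closeness(path, 3) > rw_closeness(cycle, 3)
```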
<p>Proof. Note that it is enough to prove it for vertex u. Write e = uv and define
G′ := (V, E ∪ {e}). Also, we will use the notation HG(u, v) and HG′(u, v) for the
old and new hitting times, respectively, and similarly h.G(u) and h.G′(u) for the centrality
values. Note that because we are only adding an edge, the set
of vertices remains the same for both graphs, and therefore we can refer to it
simply as V.</p>
<p>We have that for all w ∈ V \ {u}, HG(w, u) &gt; HG′(w, u). Indeed, whenever a
random walk steps into v, in G′ it has the opportunity of going through e to reach
u in only one more step, whereas in G it necessarily has to take one more step
into a neighbor of v, and then, in the best-case scenario, another one to reach u.
Therefore, by using this we have that

h.G′(u) = Σ_{w ≠ u} HG′(w, u) &lt; Σ_{w ≠ u} HG(w, u) = h.G(u). ∎</p>
    </sec>
    <sec id="sec-conclusions">
      <title>Conclusions</title>
      <p>We studied a notion of centrality based on random walks over undirected
graphs. Besides experimental evidence (which we do not show in this article), this
notion has nice theoretical properties.</p>
<p>Although in this paper we concentrated on proving that it satisfies the recently
introduced Boldi-Vigna axioms of centrality, the techniques used give an
insight into their potential. In fact, it can be proved that our notion of centrality
captures fine properties of central nodes in undirected graphs. In a future paper
we will present these results.</p>
<p>Acknowledgments. The authors thank the funding of the Millennium Nucleus Center
for Semantic Web Research under Grant NC120004.</p>
    </sec>
    <sec id="sec-7">
      <title>Appendix</title>
<p>Proposition 9. Let uv be a bridge of a connected graph G. Then

h.(u) &lt; h.(v) ⟺ (2|E(Gu)| + 1)|V(Gu)| &gt; (2|E(Gv)| + 1)|V(Gv)|.</p>
      <p>Proof. Suppose that (2|E(Gu)| + 1)|V(Gu)| &gt; (2|E(Gv)| + 1)|V(Gv)|. Since uv is a
bridge, every w–u path with w ∈ V(Gv) contains v, and every w–v path with
w ∈ V(Gu) contains u, so Corollary 5 and Theorem 8 apply below. Then h.(u) is equal to

Σ_{w ∈ V(G), w ≠ u} H(w, u)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} H(w, u) + H(v, u)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} [H(w, v) + H(v, u)] + H(v, u)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + |V(Gv)| H(v, u)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + |V(Gv)| (2|E(Gv)| + 1)
&lt; Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + |V(Gu)| (2|E(Gu)| + 1)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, u) + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + |V(Gu)| H(u, v)
= Σ_{w ∈ V(Gu), w ≠ u} [H(w, u) + H(u, v)] + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + H(u, v)
= Σ_{w ∈ V(Gu), w ≠ u} H(w, v) + Σ_{w ∈ V(Gv), w ≠ v} H(w, v) + H(u, v)
= Σ_{w ∈ V(G), w ≠ v} H(w, v) = h.(v),

so h.(u) &lt; h.(v). Reading the same chain with the reverse strict inequality proves the
converse implication. ∎</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Thad</given-names>
<surname>Hughes</surname>
          </string-name>
          &amp; Daniel
          <string-name>
<surname>Ramage</surname>
          </string-name>
          (
          <year>2007</year>
          ).
<article-title>Lexical semantic relatedness with random graph walks</article-title>
          .
          <source>In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning</source>
          , pp.
          <fpage>581</fpage>
          -
          <lpage>589</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Joshua</surname>
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Abbott</surname>
          </string-name>
          , Joseph L. Austerweil, Thomas L. Griffiths
          (
          <year>2012</year>
          ).
          <article-title>Human memory search as a random walk in a semantic network</article-title>
          .
          <source>In Advances in Neural Information Processing Systems</source>
          <volume>25</volume>
          , pp.
          <fpage>3050</fpage>
          -
          <lpage>3058</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Camilo</given-names>
            <surname>Garrido García</surname>
          </string-name>
          (
          <year>2013</year>
          ). Resúmenes Semiautomáticos de Conocimiento: caso de RDF. Memoria para optar al título de Ingeniero Civil en Computación, http://repositorio.uchile.cl/bitstream/handle/2250/113509/cf-garrido_cg.pdf?sequence=1&amp;isAllowed=y.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Linton</surname>
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Freeman</surname>
          </string-name>
          (
          <year>1978</year>
          /79).
<article-title>Centrality in Social Networks: Conceptual Clarification</article-title>
          .
          <source>In Social Networks 1</source>
          , pp.
<fpage>215</fpage>
          -
          <lpage>239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Boldi</surname>
          </string-name>
          , Sebastiano
          <string-name>
            <surname>Vigna</surname>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Axioms for Centrality</article-title>
          .
          <source>CoRR</source>
          , vol. abs/1308.2140.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. L.
          <string-name>
            <surname>Lovasz</surname>
          </string-name>
          (
          <year>1993</year>
          ).
          <article-title>Random Walks on Graphs; A Survey</article-title>
          .
<source>In Bolyai Soc. Math. Studies 2</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
<given-names>Jae</given-names>
            <surname>Dong</surname>
          </string-name>
          <string-name>
            <given-names>Noh</given-names>
            , Heiko
            <surname>Rieger</surname>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Random Walks on Complex Networks</article-title>
          .
          <source>In Physical Review Letters</source>
          , Volume
          <volume>92</volume>
          , Number 11.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Shravas</surname>
            <given-names>K Rao</given-names>
          </string-name>
          , Finding Hitting Times in Various Graphs.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>M.E.J. Newman</surname>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>A measure of betweenness centrality based on random walks</article-title>
          .
          <source>In Social Networks 27</source>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>