<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Algorithmic Way to Generate Simplexes for Topological Data Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Krzysztof Rykaczewski</string-name>
          <email>krykaczewski@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Piotr Wiśniewski</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Krzysztof Stencel</string-name>
          <email>stencel@mimuw.edu.pl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DLabs</institution>
          ,
          <addr-line>Lęborska 3B, 80-230 Gdańsk</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Mathematics and Computer Science, Nicolaus Copernicus University</institution>
          ,
          <addr-line>Chopina 12/18, 87-100 Toruń</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Faculty of Mathematics, Informatics and Mechanics University of Warsaw</institution>
          ,
          <addr-line>Banacha 2, 02-097 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this article we present a new algorithm for creating simplicial Vietoris-Rips complexes that is easily parallelizable using computation models like MapReduce and Apache Spark. The algorithm does not involve any computation in homology spaces.</p>
      </abstract>
      <kwd-group>
        <kwd>data analysis</kwd>
        <kwd>topology</kwd>
        <kwd>simplicial complex</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The article by de Silva and Ghrist [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] showed that algebraic topology can be a
very practical tool. Its authors analyse the coverage of an area
with a network of sensors. The network is interpreted as a simplicial complex,
whose homology type is then computed and exploited. If this complex is homotopy
equivalent to a point, the coverage is complete. The authors thus translated
a technical problem into a mathematical one.
      </p>
      <p>
        In the article [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] Ghrist searches for topological structures in big data sets. Again,
this amounts to building simplicial complexes and analysing them.
      </p>
      <p>The abovementioned works inspire the question of how to build simplicial
complexes efficiently and how to analyse them. In this article we propose a novel
algorithm to build the simplicial complex. This algorithm has the unique
property that it can be implemented within massively parallel computational models
like MapReduce and Apache Spark. We have no insight into how the data actually
looks and how it is embedded, but it would be enough to know its “shape” to say
something about how it is arranged in the original space.</p>
      <p>
        In recent papers, Carlsson and collaborators [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] presented ideas to explore
some properties of high-dimensional data sets. They use algebraic topology tools
to find certain characteristics of the data or a simpler representation of it. This
approach allows one to discover objects and features that are immersed in
high-dimensional space.
      </p>
      <p>We start the presentation of basic facts with the notion of a simplex. Colloquially
speaking, a simplex is a generalization of a segment, a triangle and a tetrahedron
to any dimension. More formally, suppose the k + 1 points p0, …, pk ∈ Rᵏ are
affinely independent, which means that p1 − p0, …, pk − p0 are linearly independent.
Then, the k-simplex determined by them is the set of points</p>
      <p>σ := { λ0 p0 + … + λk pk | λi ≥ 0 for 0 ≤ i ≤ k, and λ0 + … + λk = 1 },
(1)
i.e. a k-simplex is a k-dimensional polytope which is the convex hull of its k + 1
vertices. We often write σ = [p0, …, pk] for short. The points p0, …, pk are called
the vertices of the k-simplex. Each simplex is uniquely determined by its vertices.
Any subset of the vertices constitutes a simplex of dimension smaller than the whole
one, called a face of the simplex.</p>
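      <p>The affine-independence condition above can be checked directly: the vectors p1 − p0, …, pk − p0 must be linearly independent, i.e. the matrix they form must have rank k. A minimal Python sketch (the function name and numerical tolerance are our own illustration, not part of the algorithm presented later):</p>

```python
def affinely_independent(points, eps=1e-9):
    """Check that p1 - p0, ..., pk - p0 are linearly independent
    by computing the rank via Gaussian elimination."""
    p0, rest = points[0], points[1:]
    rows = [[x - y for x, y in zip(p, p0)] for p in rest]
    rank = 0
    cols = len(rows[0]) if rows else 0
    for col in range(cols):
        # find a pivot row for this column
        pivot = next((r for r in range(rank, len(rows))
                      if abs(rows[r][col]) > eps), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # eliminate the column below the pivot
        for r in range(rank + 1, len(rows)):
            f = rows[r][col] / rows[rank][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)
```

      <p>For instance, (0, 0), (1, 0), (0, 1) span a 2-simplex (a triangle), while three collinear points such as (0, 0), (1, 1), (2, 2) do not.</p>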
      <p>We say that two vertices v and w are neighbors if there is a 1-simplex (edge)
connecting them.</p>
      <sec id="sec-1-1">
        <title>Definition 1. A simplicial complex K is a set of simplexes that satisfies the</title>
        <p>following conditions:</p>
      </sec>
      <sec id="sec-1-2">
        <title>1. Any face of a simplex from K is also in K.</title>
      </sec>
      <sec id="sec-1-3">
        <title>2. The intersection of any two simplexes σ1, σ2 ∈ K is either ∅ or a face of</title>
        <p>both σ1 and σ2.</p>
        <p>Having a data set P, we can use it to build a variety of complexes. We shall
now formulate the two most frequent definitions.</p>
        <sec id="sec-1-3-1">
          <title>Definition 2. For a discrete set of points P ⊆ Rᵈ and a parameter ε &gt; 0, we</title>
          <p>define the Čech complex of radius ε as</p>
          <p>Č(P, ε) := { [p1, p2, …, pk] | {p1, p2, …, pk} ⊆ P, ∩i B(pi, ε/2) ≠ ∅ },
(2)
where B(p, ε) is the closed ball of radius ε centered at p.</p>
          <p>The nerve theorem states that the Čech complex Č(P, ε) has the homotopy type of
the union of closed balls of radius ε/2 around the data P. This means that the
Čech complex gives a faithful representation of the thickened data (they have a common
“shape”).</p>
          <p>The Vietoris-Rips complex is closely related to the Čech complex.</p>
        </sec>
        <sec id="sec-1-3-2">
          <title>Definition 3. For a given ε &gt; 0, the Vietoris-Rips complex (later called the Rips</title>
          <p>complex) R(P, ε) is the largest simplicial complex that shares the same 1-skeleton
(graph) as the Čech complex Č(P, ε). More explicitly,</p>
          <p>R(P, ε) := { σ := [p1, p2, …, pk] | {p1, p2, …, pk} ⊆ P, diam(σ) ≤ ε },
(3)
where diam(σ) is the diameter of the simplex σ.</p>
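          <p>For a small point cloud, Definition 3 can be applied directly: a subset of P spans a simplex exactly when all pairwise distances, and hence its diameter, are at most ε. A naive sequential sketch (names and the max_dim cut-off are ours; this is an illustration, not the parallel algorithm presented later):</p>

```python
from itertools import combinations
from math import dist

def rips_complex(points, eps, max_dim=2):
    """Naive Vietoris-Rips construction: a vertex subset spans a
    simplex iff every pairwise distance is at most eps."""
    simplexes = []
    for k in range(1, max_dim + 2):            # simplexes with k vertices
        for idx in combinations(range(len(points)), k):
            if all(dist(points[i], points[j]) <= eps
                   for i, j in combinations(idx, 2)):
                simplexes.append(idx)
    return simplexes
```

          <p>For example, with points (0, 0), (1, 0), (0, 1), (5, 5) and ε = 1.5, the first three points span a triangle while the fourth remains an isolated vertex.</p>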
        </sec>
        <sec id="sec-1-3-3">
          <title>Lemma 1 (Vietoris-Rips). For every finite set of points P ⊆ Rᵈ and ε ≥ 0,</title>
          <p>Č(P, ε) ⊆ R(P, ε) ⊆ Č(P, 2ε).
(4)</p>
          <p>
            Thus, if the Čech complex approximates the data well for both ε and 2ε,
then the Vietoris-Rips complex does so as well. This estimate can be
further improved [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ].
          </p>
          <p>Remark 1. The Čech complex is difficult to calculate, but it is quite small. Conversely,
the Vietoris-Rips complex is easy to calculate, but it is usually very big. Both the Čech
and the Vietoris-Rips complexes can produce a simplicial complex of dimension greater
than that of the considered space. Therefore, many alternatives to them have been
considered: the Delaunay Complex, Alpha Complex, (Lazy) Witness Complex, Mapper
Complex, Sparsified Rips Complex, and Graph Induced Complex (GIC).</p>
          <p>For small ε &gt; 0 we have a disjoint set of balls, whereas for big ε the covering by
balls is simply connected (without holes), so we totally lose the knowledge
about the structure of the data. Since (in general) we do not know which radius ε
to take, we consider them all and obtain in this way a filtration of simplicial
complexes, i.e.</p>
          <p>
            Č(P, ε1) ⊆ Č(P, ε2) ⊆ … ⊆ Č(P, εn),
(5)
for 0 &lt; ε1 ≤ ε2 ≤ … ≤ εn. With the change of radius, the topology
of the data also changes: some holes are created and some disappear. Roughly speaking,
the holes that last longest represent the shape of the data. This is usually represented in
the form of a so-called barcode [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>Let us recall the following definition from topology.</p>
        </sec>
        <sec id="sec-1-3-4">
          <title>Definition 4. Two topological spaces X and Y are called homotopy equivalent if</title>
          <p>there exist continuous maps f : X → Y and g : Y → X such that the composition
f ∘ g is homotopic (i.e. it can be deformed in a continuous way) to the identity
idY on Y, and g ∘ f is homotopic to idX.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>Data analysis is a process of modeling data in order to discover useful
information. Unfortunately, mining high-dimensional data is very challenging. Therefore,
a number of methods have been created to make it easier for researchers. One of
the youngest but most rapidly growing fields is topological data analysis (TDA).</p>
      <p>
        TDA is an approach to analysing data sets using techniques from topology. The
method relies on the calculation of properties of the space/data that are
maintained (are invariant) under certain transformations. Although most topological
invariants are difficult to calculate, there are algorithms which can calculate
homology groups and barcodes [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. These groups are often used in
applications [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Topological methods, due to their nature, can be used to cope with many
problems where traditional methods fail. They have broad applications in the coverage
problem in sensor networks [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], detecting breast cancer [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], computer vision [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],
detection of periodicity in time series (behaviour classification) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] etc.
      </p>
      <p>
        Another issue addressed in this subject is the speed of algorithms. In paper [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
Dłotko et al. presented a distributed algorithm for homology computation over a
sensor network. Their technique involves reduction and coreduction of simplicial
complexes.
      </p>
      <p>
        Apart from the algebraic approach, algorithms have been created that do
not use calculations on homology groups to obtain an approximation of the
shape of the data [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The algorithm proposed in the latter paper can have problems with
more complicated structures, like the sphere, but it has been shown to do well
with coverage in sensor networks.
      </p>
      <p>
        Notice that apart from using the Čech and Rips complexes there are other
directions in computational topology, but we do not discuss them here [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>A Method to Build Complex using MapReduce</title>
      <p>In this section we present an efficient method to build Vietoris-Rips complexes.
We make the following assumptions:
– Rⁿ is a Euclidean space.
– A fixed number of dimensions n is given.
– M ⊆ Rⁿ is the set of input data points.
– Each input data point x has a unique identifier id(x). These
identifiers are items of a linearly ordered set.
– For any i ∈ {1, 2, …, n} we define πi : Rⁿ → R to be the projection onto the
i-th dimension.
– In Rⁿ we assume the Euclidean distance. For any two x, y ∈ Rⁿ, let |x, y|
denote the Euclidean distance between x and y.
– Let us choose ε &gt; 0. We are going to build the Vietoris-Rips complex R(M, ε).</p>
      <p>The first step in building a Vietoris-Rips complex is to find all pairs of points
x, y ∈ M such that |x, y| &lt; ε. If the set M is large, computing the distances between
all pairs of points is too time-consuming. If the number of dimensions n is equal to
1, we can assign all points to segments of the form [kε, (k + 1)ε). Then, instead of
computing distances for all pairs, we consider only those pairs of points that fall
into the same segment or two neighbouring segments. As a result, each point
is paired only with points from three segments. This can significantly accelerate
the computation. Of course, in a one-dimensional space, a simpler method based on
sorting and sequential scanning will be faster. However, that method
cannot be extended to more dimensions.</p>
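      <p>The one-dimensional segment trick can be sketched in a few lines of Python (the function and variable names are ours): each point goes into bucket ⌊x/ε⌋, and only its own and the two neighbouring buckets are inspected.</p>

```python
from collections import defaultdict

def close_pairs_1d(xs, eps):
    """Bucket points into segments [k*eps, (k+1)*eps) and compare each
    point only with points in its own and the neighbouring segments."""
    buckets = defaultdict(list)
    for i, x in enumerate(xs):
        buckets[int(x // eps)].append(i)
    pairs = set()
    for k, ids in buckets.items():
        for nk in (k - 1, k, k + 1):
            for i in ids:
                for j in buckets.get(nk, []):
                    # i < j avoids reporting a pair twice
                    if i < j and abs(xs[i] - xs[j]) < eps:
                        pairs.add((i, j))
    return pairs
```
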
      <p>For a two-dimensional space we can apply a method similar to the one
described above. Instead of segments, we then have squares with sides of
length ε. Eventually, each point will be paired for distance computation only
with points from nine squares. The three-dimensional case is similar, but
we use cubes with sides of length ε and each point is paired with points from 27
cubes.</p>
      <p>Unfortunately, an enormous growth of the number of dimensions prohibits direct
usage of this method. In the case of a 100-dimensional space, the “segments” will be
100-dimensional hypercubes with ε-long sides. Each point will be paired for distance
computation with points from 3¹⁰⁰ hypercubes. Obviously, the naïve method will
turn out to be better in this case.</p>
      <p>However, we can apply the abovementioned three-dimensional method in
multi-dimensional spaces indirectly. If we choose three dimensions i, j, k, we can
project the whole space onto them and then search for close pairs as
in a three-dimensional space. Let us consider the projection π = πi × πj × πk :
Rⁿ → R³. Obviously, for each pair x, y ∈ Rⁿ the following inequality holds:
|π(x), π(y)|₃ ≤ |x, y|,
where | … |₃ denotes the Euclidean distance in R³.</p>
      <p>We will assign all points to ε-sided cubes according to their projections onto
the chosen dimensions. Then, it is enough to pair each point for distance
computation only with points residing in the same cube or in the 26 neighbouring cubes. If
two points are assigned neither to the same cube nor to neighbouring cubes, the distance
between their projections onto the chosen dimensions is larger than ε. Therefore, so is
their distance in Rⁿ, and the pair does not have to be considered.</p>
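      <p>A sequential sketch of this cube-bucketing step (function and variable names are ours; the MapReduce formulation is given below): points in Rⁿ are keyed by their projection onto three chosen dimensions, and only same-cube and neighbouring-cube pairs become candidates, which is sound because |π(x), π(y)|₃ ≤ |x, y|.</p>

```python
from collections import defaultdict
from itertools import product
from math import dist

def close_pairs_projected(points, eps, dims=(0, 1, 2)):
    """Bucket points into eps-sided cubes by their projection onto the
    three chosen dimensions; test only candidates from the same cube
    or one of the 26 neighbouring cubes."""
    cubes = defaultdict(list)
    for i, p in enumerate(points):
        key = tuple(int(p[d] // eps) for d in dims)
        cubes[key].append(i)
    pairs = set()
    for key, ids in cubes.items():
        for off in product((-1, 0, 1), repeat=3):
            nkey = tuple(k + o for k, o in zip(key, off))
            for i in ids:
                for j in cubes.get(nkey, []):
                    # the distance test is done in the full space R^n
                    if i < j and dist(points[i], points[j]) < eps:
                        pairs.add((i, j))
    return pairs
```
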
      <p>
        The efficiency of this method notably depends on the choice of the three
projected dimensions. Well-chosen dimensions may reduce the number of distance
computations required. Our first idea was to compare the projection onto each single
dimension with the uniform distribution using an algorithm based on the Kullback–
Leibler divergence [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, this approach required two scans of the input data.
Moreover, it is hard to distinguish denser and sparser distributions.
The problem lies in sparseness (with respect to ε) rather than uniformity. As a
result, we invented the following quality measure of dimensions.
      </p>
      <p>For a given dimension i and an integer t we define:
ρ(i, t) = |{ x ∈ M : tε ≤ πi(x) &lt; (t + 1)ε }|.
The number ρ(i, t) tells how many points will fall into the segment identified by
t if we project onto the dimension i. Note that if a point x satisfies the condition
in the definition above, then t = ⌊πi(x)/ε⌋, where ⌊a⌋ is the largest integer not greater
than a.</p>
      <p>For a dimension i ∈ {1, 2, …, n} and an input data point x ∈ M we define
ti(x) to be the integer:</p>
      <p>ti(x) = ⌊πi(x)/ε⌋.</p>
      <p>Observe that the computation of all non-zero values of ρ is possible using
only a single scan of the input data. Moreover, it is easily implementable in the
MapReduce computation model. The mapper emits ⟨i, ti(x)⟩ for each input data
point x ∈ M and for each dimension i. The reducer simply counts the number
of occurrences of the pairs ⟨i, t⟩.</p>
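      <p>The single scan can be simulated sequentially in a few lines (names are ours): in MapReduce terms, the generator plays the mapper emitting ⟨i, ti(x)⟩, and Counter plays the reducer counting occurrences, yielding all non-zero values of ρ.</p>

```python
from collections import Counter

def count_segments(points, eps):
    """Single-scan simulation of the MapReduce job: emit (i, t_i(x))
    for every point x and dimension i, then count occurrences."""
    emitted = ((i, int(x // eps))
               for p in points
               for i, x in enumerate(p))
    return Counter(emitted)
```
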
      <sec id="sec-3-1">
        <title>Non-dispersion measure of a dimension i is the number</title>
        <p>μi = Σ_{t∈Z} ρ(i, t)².</p>
        <p>Now, for each dimension we compute its non-dispersion measure by means
of the MapReduce procedure presented above. Then we choose the three dimensions
with the smallest non-dispersions.</p>
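      <p>The measure and the selection step can be sketched as follows (sequential, names ours): a dimension along which the points crowd into few segments has large Σ ρ², so we prefer dimensions with small μi.</p>

```python
from collections import Counter

def non_dispersion(points, dim, eps):
    """mu_i = sum over t of rho(i, t)^2, where rho(i, t) counts the
    points whose dim-th coordinate falls into segment t."""
    counts = Counter(int(p[dim] // eps) for p in points)
    return sum(c * c for c in counts.values())

def best_dimensions(points, eps, k=3):
    """Pick the k dimensions with the smallest non-dispersion."""
    n = len(points[0])
    return sorted(range(n), key=lambda i: non_dispersion(points, i, eps))[:k]
```
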
        <p>Assume i, j, k to be the three dimensions with the smallest non-dispersion. The
algorithm to find all pairs of points closer than ε using these dimensions can be
easily parallelized in the MapReduce computation model.</p>
        <p>The mapper reads all input data points and for each point x emits 27
key-value pairs:</p>
        <p>⟨⟨t1, t2, t3⟩, x⟩,
where t1, t2, t3 satisfy the following conditions:
t1 ∈ { ti(x) − 1, ti(x), ti(x) + 1 },
t2 ∈ { tj(x) − 1, tj(x), tj(x) + 1 },
t3 ∈ { tk(x) − 1, tk(x), tk(x) + 1 }.</p>
        <p>A single reducer collects all pairs that come with a given key ⟨t1, t2, t3⟩. If a
pair x, y satisfies the conditions id(x) &lt; id(y), t1 = ti(x), t2 = tj(x), t3 = tk(x),
then the distance between the points x, y is computed. The three equalities on the
dimensions assure that x lies in the very ε-sided cube ⟨t1, t2, t3⟩. If |x, y| &lt; ε, the
pair ⟨x, y⟩ is added to the result.</p>
        <p>The collection of all pairs ⟨x, y⟩ that satisfy |x, y| &lt; ε is the most
time-consuming phase of the construction of the Vietoris-Rips complex. These pairs
are one-dimensional simplexes. The second phase computes triplets ⟨x, y, z⟩
such that the three points are pairwise closer than ε and id(x) &lt; id(y) &lt; id(z).
The following phases consist in identifying longer and longer lists of points that
satisfy analogous conditions. It can be done as in the Apriori algorithm for finding
frequent subsets, because if all points in a set are closer than ε, then the same
must hold for all its subsets. If ε is not too big, these phases are not
time-consuming. However, it is much simpler to build higher-dimensional simplexes
using the following lemma.</p>
        <p>Lemma 2 (Raising the dimension of simplexes). Let R be a Vietoris-Rips
complex. A simplex a := ⟨x1, …, xn, xn+1, xn+2⟩ belongs to R if and only if all
three of the following simplexes are also in R:
b := ⟨x1, …, xn, xn+1⟩,
c := ⟨x1, …, xn, xn+2⟩,
d := ⟨xn+1, xn+2⟩.
Proof. The “only if” part is straightforward. If a ∈ R, then all its subsimplexes
obviously belong to R. Thus, so do b, c and d.</p>
        <p>The proof of the “if” part is based on the following property of Vietoris-Rips
complexes: a simplex belongs to R if and only if all its one-dimensional subsimplexes
belong to R. Take a one-dimensional simplex e := ⟨xj, xk⟩ ⊆ a, where
j, k ∈ {1, …, n + 2} and j &lt; k. We will show that e ∈ R.</p>
        <p>We have to consider three cases. Firstly, if j = n + 1 and k = n + 2, then
e = d ∈ R. Secondly, if j &lt; n + 1 and k = n + 2, then e ⊆ c ∈ R, thus e ∈ R.
Thirdly, if k &lt; n + 2, then j &lt; n + 1, thus e ⊆ b ∈ R and also e ∈ R.</p>
        <p>Since the choice of j and k was arbitrary, the “if” part has been proven, and so
has the whole lemma. □</p>
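        <p>Lemma 2 translates directly into one raising step: group the current simplexes by their common prefix, pair up the last vertices, and keep a pair only if it forms an edge of the complex. A sequential sketch of that step (names are ours; the relational formulation of the same step appears in the following paragraphs):</p>

```python
from collections import defaultdict

def raise_dimension(simplexes, edges):
    """One step of Lemma 2: a simplex (x1..xn, xj, xk) is in the complex
    iff b = (x1..xn, xj), c = (x1..xn, xk) and d = (xj, xk) all are."""
    by_prefix = defaultdict(list)
    for s in simplexes:                 # group b and c by their common prefix
        by_prefix[s[:-1]].append(s[-1])
    edge_set = set(edges)
    out = []
    for prefix, lasts in by_prefix.items():
        lasts.sort()                    # enforce id(xj) < id(xk)
        for a in range(len(lasts)):
            for b in range(a + 1, len(lasts)):
                if (lasts[a], lasts[b]) in edge_set:   # d must be an edge
                    out.append(prefix + (lasts[a], lasts[b]))
    return out
```

        <p>Iterating raise_dimension until it returns an empty list produces the simplexes of all higher dimensions, mirroring the self-join-based procedure below.</p>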
        <p>As the result of the first phase of the algorithm, we have a collection named
pairs that contains all pairs ⟨xi, xj⟩ such that
|xi, xj| &lt; ε ∧ id(xi) &lt; id(xj).
We recall that id(xi) is the identifier of the point xi.</p>
        <p>In the second phase of the algorithm, we build collections of simplexes of
subsequent dimensions n that belong to the Vietoris-Rips complex R. The algorithm
stops when for a given n we get an empty list of simplexes.</p>
        <p>For a given n, we search for all b, c and d that satisfy the constraints of
Lemma 2. The symbols introduced in Lemma 2 will prove handy again.
We perform the following steps:
1. We convert the collection of simplexes ⟨x1, x2, …, xn+1⟩ of dimension n
to the collection of pairs ⟨⟨x1, x2, …, xn⟩, xn+1⟩.
2. We compute a natural self-join of this set of pairs using the first
coordinate. As the result we get the set of all triplets ⟨⟨x1, x2, …, xn⟩, xj, xk⟩
(a as in Lemma 2) such that the pairs ⟨⟨x1, x2, …, xn⟩, xj⟩ (b) and
⟨⟨x1, x2, …, xn⟩, xk⟩ (c) were present in the previous step.
3. We keep (select) only those triplets that satisfy id(xj) &lt; id(xk).
4. We equi-join the remaining triplets with the set of all pairs using the last two
coordinates of the triplets. We do it to ensure that the last two coordinates
of each triplet (d) form a one-dimensional simplex belonging to the complex.
5. We flatten the remaining triplets to get simplexes of dimension n + 1 of the
form ⟨x1, x2, …, xn, xj, xk⟩.</p>
        <p>Note that this procedure contains only operations that are expressible in
relational algebra (or calculus). Therefore, the whole algorithm is formulated in
a highly abstract language that is known to be easy to optimise and execute in
parallel. We exploited this possibility and implemented it in Apache Spark for
efficient execution on large computing infrastructures of commodity computers.
Example Python code that implements the whole algorithm is shown below; points
and pairs are RDDs holding the input points and the close pairs found in the first
phase, and the helper rightTupleSum concatenates a prefix tuple with the pair of
last vertices.</p>
        <p>komplex = []
komplex.append(points.map(lambda p: p[0]))  # 0-dimensional simplexes (vertices)
komplex.append(pairs)                       # 1-dimensional simplexes (edges)
# initial
base = pairs
pairsAsKey = pairs.map(lambda p: (p, 1))
noAnalyzedObjects = pairs.count()
# iterated: steps 1-5 of the procedure above
while noAnalyzedObjects &gt; 0:
    base = base.map(lambda a: (a[:-1], a[-1]))
    base = base.join(base) \
        .filter(lambda a: a[1][0] &lt; a[1][1]) \
        .map(lambda a: (a[1], a[0])) \
        .join(pairsAsKey) \
        .map(lambda a: rightTupleSum(a[1][0], a[0]))
    noAnalyzedObjects = base.count()
    if noAnalyzedObjects &gt; 0:
        komplex.append(base)</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this article we have presented an efficient method to build Vietoris-Rips
complexes. Moreover, this method is easily parallelizable using the MapReduce
computation model or Apache Spark. Further research will focus on algorithms that
reduce the constructed complexes. This will facilitate finding the homology types of
the complexes. As a result, it will also allow assessing the correctness of the selection
of the value of ε.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Carlsson</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishkhanov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Silva</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zomorodian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>On the local behavior of spaces of natural images</article-title>
          .
          <source>International journal of computer vision 76</source>
          (
          <year>2008</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>De Silva</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghrist</surname>
          </string-name>
          , R.:
          <article-title>Coverage in sensor networks via persistent homology</article-title>
          .
          <source>Algebraic &amp; Geometric Topology</source>
          <volume>7</volume>
          (
          <year>2007</year>
          )
          <fpage>339</fpage>
          -
          <lpage>358</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Ghrist</surname>
          </string-name>
          , R.:
          <article-title>Barcodes: the persistent topology of data</article-title>
          .
          <source>Bulletin of the American Mathematical Society</source>
          <volume>45</volume>
          (
          <year>2008</year>
          )
          <fpage>61</fpage>
          -
          <lpage>75</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Edelsbrunner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Letscher</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zomorodian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Topological persistence and simplification</article-title>
          .
          <source>Discrete and Computational Geometry</source>
          <volume>28</volume>
          (
          <year>2002</year>
          )
          <fpage>511</fpage>
          -
          <lpage>533</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Zomorodian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carlsson</surname>
          </string-name>
          , G.:
          <article-title>Computing persistent homology</article-title>
          .
          <source>Discrete &amp; Computational Geometry</source>
          <volume>33</volume>
          (
          <year>2005</year>
          )
          <fpage>249</fpage>
          -
          <lpage>274</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Nicolau</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levine</surname>
            ,
            <given-names>A.J.</given-names>
          </string-name>
          , Carlsson, G.:
          <article-title>Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival</article-title>
          .
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>108</volume>
          (
          <year>2011</year>
          )
          <fpage>7265</fpage>
          -
          <lpage>7270</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Freedman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Algebraic topology for computer vision</article-title>
          .
          <source>Computer Vision</source>
          (
          <year>2009</year>
          )
          <fpage>239</fpage>
          -
          <lpage>268</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Vejdemo-Johansson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pokorny</surname>
            ,
            <given-names>F.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skraba</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kragic</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Cohomological learning of periodic motion</article-title>
          .
          <source>Applicable Algebra in Engineering, Communication and Computing</source>
          <volume>26</volume>
          (
          <year>2015</year>
          )
          <fpage>5</fpage>
          -
          <lpage>26</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dłotko</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghrist</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Juda</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mrozek</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Distributed computation of coverage in sensor networks by homological methods</article-title>
          .
          <source>Applicable Algebra in Engineering, Communication and Computing</source>
          <volume>23</volume>
          (
          <year>2012</year>
          )
          <fpage>29</fpage>
          -
          <lpage>58</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ćwiszewski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiśniewski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Coverage verification algorithm for sensor networks</article-title>
          . In:
          <source>Computer Applications for Bio-technology, Multimedia, and Ubiquitous City</source>
          . Springer (
          <year>2012</year>
          )
          <fpage>397</fpage>
          -
          <lpage>405</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Amari</surname>
            ,
            <given-names>S.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cichocki</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>H.H.</given-names>
          </string-name>
          , et al.:
          <article-title>A new learning algorithm for blind signal separation</article-title>
          .
          <source>Advances in neural information processing systems</source>
          (
          <year>1996</year>
          )
          <fpage>757</fpage>
          -
          <lpage>763</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>