<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Distributed General Galois Lattice for Large Data Bases</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>CERIA Laboratory, Paris Dauphine University, Place du Maréchal de Lattre de Tassigny</institution>
          ,
<addr-line>75775 Paris cedex 16, France</addr-line>
        </aff>
      </contrib-group>
      <fpage>197</fpage>
      <lpage>206</lpage>
      <abstract>
<p>Large data processing is an essential problem in many data mining applications. Our work explores algorithms calculating a Generalized Galois Lattice (G2L) over a large collection of data. A G2L contains all the so-called closed sets of a collection of tuples (individuals characterized by a set of properties). G2Ls generalize to multivalued data the GLs already popular in concept analysis over binary values. They appear to be an attractive tool for data mining. The G2L or GL calculus is CPU intensive. In practice, current techniques limit the approach to small sets of data only, e.g., a hundred tuples with a few dozen properties each. Our research consists in building scalable distributed G2L calculus algorithms. We consider this research direction promising and probably the only way towards our goal. We first implemented and analyzed some centralized algorithms. One, termed ELL, seems to be the most efficient. We have therefore defined a scalable distributed version of ELL, termed SD-ELL. It recursively partitions the set of tuples for the G2L computation over sufficiently many sites to let ELL execute fast enough at each site. The closed sets produced by each site enter a common scalable distributed data structure (SDDS). We then plan to test the resulting behavior and analyze the performance of the algorithm and of some promising variants.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
A Galois Lattice (GL) allows extracting concepts and rules from data.
Several algorithms determine the GL over a given context C, provided the size of C is
small [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These algorithms apply to binary data. Their generalization, G2L
was proposed in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. An algorithm for G2L calculus termed ELL, and some alternative
ones, were subsequently defined in [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. These algorithms, like any previous ones for GL
computation, were designed for single-site computation. It was known that they
could run in reasonable time only for small sets, e.g., of at most a few hundred
tuples.
      </p>
<p>Further analysis has shown that the most promising way towards larger sets of data,
the only ones of interest to data mining, was to find a scalable distributed variant of
the G2L calculus. This goal became the subject of our work.</p>
      <p>We have defined therefore a scalable distributed version of ELL termed SD-ELL. It
recursively partitions the set of tuples for the G2L computation over sufficiently many
sites to let ELL execute fast enough at each site. The closed sets produced by each site
enter a common scalable distributed data structure (SDDS). We then plan to test the
resulting behavior and analyze the performance of the algorithm.</p>
      <p>
The rest of the paper is organized as follows. In Section 2, we recall the definition of a
G2L. Section 3 recalls the main idea of ELL. Section 4 overviews the principles of our
scalable distributed extension, exploring the recursive G2L calculation in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Section 5 surveys related work, and Section
6 concludes the paper by listing future research directions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 General Galois Lattices</title>
      <p>
        Concept lattice [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and Closed itemset lattice are based on order theory and lattice
theory [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. They are used to represent the order relation on concepts or closed
itemsets. Concept lattice describes the character of the set pair: intent and extent of
concept. And the general of Galois Lattice formalism was addressed by [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
In this section, we define some notions generalizing usual ones: Data context, Closure
operator, Closed itemset, etc.
      </p>
<p>Definition 1. A lattice is a mathematical structure F = &lt;F, ≤, ∨, ∧, 0F, 1F&gt;, where F
is a set partially ordered by the relation ≤, with a largest element 1F, a smallest
element 0F, and ∨, ∧ the internal composition laws of sup (or supremum) and inf (or
infimum).</p>
<p>In many situations F is the Cartesian product of several lattices Fj = &lt;Fj, ≤j, ∨j, ∧j,
0Fj, 1Fj&gt;, for j ∈ J = {1,..., n}. We write this F = F1 × ... × Fj × ... × Fn.</p>
<p>The relation ≤ on F is defined by z = (z1,..., zj,..., zn) ≤ t = (t1,..., tj,..., tn) iff zj ≤j tj for
each j of J. We note z ∨ t = (..., zj ∨j tj,...), z ∧ t = (..., zj ∧j tj,...), 0F = (..., 0Fj,...
), 1F = (..., 1Fj,...). (For standard Galois lattices, we have for each j: Fj = {0, 1}, 0 &lt; 1,
0 ∨j 0 = 0, 0 ∨j 1 = 1 ∨j 0 = 1 ∨j 1 = 1, 0 ∧j 0 = 0 ∧j 1 = 1 ∧j 0 = 0, 1 ∧j 1 = 1. So 0F = (...,
0,...), and 1F = (..., 1,...).)</p>
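<p>For illustration (a sketch of ours, not from the paper), the componentwise operations on such a product of integer chains can be coded directly; the standard binary Galois lattice is the special case where each Fj = {0, 1}:</p>
<preformat>
```python
# Componentwise order, infimum, and supremum on the product lattice
# F = F1 x ... x Fn, where each Fj is a chain of integers (so the
# infimum is min and the supremum is max, coordinate by coordinate).

def leq(z, t):
    """z <= t in F iff zj <=j tj for every coordinate j."""
    return all(zj <= tj for zj, tj in zip(z, t))

def inf(z, t):
    """z ^ t: componentwise infimum."""
    return tuple(min(zj, tj) for zj, tj in zip(z, t))

def sup(z, t):
    """z v t: componentwise supremum."""
    return tuple(max(zj, tj) for zj, tj in zip(z, t))

print(inf((12, 16, 6), (12, 12, 8)))  # -> (12, 12, 6)
print(sup((12, 16, 6), (12, 12, 8)))  # -> (12, 16, 8)
print(leq((12, 12, 6), (12, 16, 6)))  # -> True
```
</preformat>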
      <sec id="sec-2-1">
        <title>Definition 2. General contexts and descriptions:</title>
<p>Let m be a finite positive integer, I = {1,..., m} = [1, m], and F any lattice. Let
d: I → F be any mapping from I to F. A context C is defined as an array with rows d(i),
i = 1,..., m.</p>
<p>Formalism: The context provides, for each individual or object i of I, its description
d(i) = (d1(i), d2(i),..., dj(i),..., dn(i)) ∈ F = F1 × ... × Fn, according to the attributes or
properties j ∈ J. (In the standard case, dj(i) = 1 means that i has property j, and dj(i) =
0 means that i does not have property j.) So, in the general case, C is an m × n array of
elements dj(i) of F, and for each individual i and each property j, dj(i) is the value of
this property for i.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Definition 3. Galois connection:</title>
<p>Let C = &lt;I, F, d&gt; be a context. We define E = P(I), |E| = 2^m, and f: E → F as follows: for
each subset X of I, f(X) = ∧{d(i): i ∈ X} if X ≠ Ø, and f(Ø) = 1F.</p>
<p>So, f(X) is the infimum of the descriptions d(i) of the elements i of X. In the standard
case f(X) is an element z = (z1,..., zj,...) of F, and zj = ∧j{dj(i): i ∈ X} = 1 iff
dj(i) = 1 for each i of X. This means that zj = 1 iff j is a property which belongs to all
i in X. For this reason, we call f(X) the intent of X.</p>
        <sec id="sec-2-2-1">
          <title>Remark:</title>
<p>For each i of I, we have f({i}) = d(i), and f is decreasing (if X ⊆ X' ⊆ I then f(X') ≤
f(X)).</p>
<p>We define g: F → E by g(z) = {i ∈ I: z ≤ d(i)}, for each z of F.</p>
<p>We say that g(z) is the extent of z. (In the standard case, g(z) is the set of all individuals
who have all the properties j of z with zj = 1.)</p>
          <p>We can see that g is also a decreasing mapping.</p>
<p>The ordered pair (f, g) is called a Galois connection. From it we derive two other
mappings:
h: P(I) → P(I), by h = g ∘ f, and k: F → F by k = f ∘ g.</p>
<p>So, for each subset X of I, we have h(X) = g(f(X)) = {i ∈ I: f(X) ≤ d(i)}, and for
each z of F, we have k(z) = f(g(z)) = ∧{d(i): i ∈ g(z)}.</p>
<p>We can see that h and k are closure operators. This means that each of them is an
increasing, extensive, and idempotent operator. More explicitly, for each X, X' of E,
and z, z' of F:</p>
<p>X ⊆ X' implies that h(X) ⊆ h(X'), z ≤ z' implies that k(z) ≤ k(z');
X ⊆ h(X), z ≤ k(z);
h(h(X)) = h(X), k(k(z)) = k(z).</p>
<p>Any subset X of I such that X = h(X) is called an I-closed set, and each z of F such that
z = k(z) is called an F-closed element.</p>
<p>Let us define H = {X ⊆ I: h(X) = X}, the set of all closed subsets of I, and K = {z ∈ F:
z = k(z)}, the set of all closed elements of F.</p>
<p>One can prove that there is a bijective mapping between H and K. The ordered pairs
(X, z) ∈ H × K such that f(X) = z, and therefore such that g(z) = X, are called the
concepts associated with the context C.</p>
<p>The set of all such concepts constitutes the Galois lattice GL(C) associated with this
context C. (The order relation on GL(C) is defined by (X, z) ≤ (X', z') iff X ⊆ X' and
z' ≤ z.)</p>
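<p>These definitions can be illustrated by a small sketch (our code, not from the paper; the three-attribute context and the element TOP standing in for 1F are hypothetical), computing f, g, and the closure h = g ∘ f by brute force:</p>
<preformat>
```python
# Galois connection (f, g) and closure h = g o f for a small multivalued
# context: each object i gets a tuple description d(i), ordered
# componentwise, with the infimum taken as the coordinate-wise min.
from itertools import combinations

TOP = (99, 99, 99)  # stands for 1F in this toy 3-attribute lattice

d = {1: (12, 16, 6), 2: (12, 12, 8), 3: (10, 13, 16)}  # toy context C
I = set(d)

def f(X):
    """Intent f(X): infimum of the d(i), i in X; f(empty set) = 1F."""
    if not X:
        return TOP
    return tuple(min(col) for col in zip(*(d[i] for i in X)))

def g(z):
    """Extent g(z): objects whose description dominates z."""
    return {i for i in I if all(a <= b for a, b in zip(z, d[i]))}

def h(X):
    """Closure operator on subsets of I."""
    return g(f(X))

# Brute-force enumeration of the I-closed sets (fine for tiny contexts):
closed = [set(X) for r in range(len(I) + 1)
          for X in combinations(sorted(I), r) if h(set(X)) == set(X)]
print(f({1, 2}))  # -> (12, 12, 6)
```
</preformat>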
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 ELL algorithm</title>
      <p>
        This algorithm was proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. We have subsequently improved it and compared
it to some other algorithms [
        <xref ref-type="bibr" rid="ref2">2</xref>
]. This comparison has shown the better performance of ELL,
which motivated our choice of this algorithm for the single-site G2L computation.
      </p>
      <p>Main idea:
For two disjoint subsets X0 and K of the set of objects I, let ELL (X0, K) denote a
procedure which lists all the closed sets of I obtained by extending X0 with some
elements of K.</p>
<p>In other words, ELL (X0, K) lists all the closed sets which strictly contain X0 and are
contained in X0 ∪ K. Obviously, ELL (Ø, I) lists all the non-empty closed sets of I.
ELL (X0, K) proceeds by dichotomy as follows:
If (K ≠ Ø)</p>
<p>Choose an element i0 ∈ K
Find the closed sets which contain i0</p>
      <p>Find the closed sets which do not contain i0</p>
      <p>Endif
The key point of the proposed algorithm is that the search time of such closed sets is
considerably reduced by using the following proposition.</p>
<p>Proposition:
Let X0 and K ≠ Ø be two disjoint subsets of I. Let i0 ∈ K.</p>
<p>We have: h(X0 ∪ {i0}) = X0 ∪ A where A = {i ∈ I\X0: f(X0) ∧ f(i0) ≤ f(i)}.
If a closed set contains X0 and i0, then it also contains A. Hence, if A ⊆ K then X0 ∪ A
is the smallest closed set containing X0 and i0 and contained in X0 ∪ K.
If a closed set contains X0 and does not contain i0, then it also does not contain any
element of the set R = {i ∈ K: f(X0) ∧ f(i) ≤ f(i0)}.</p>
      <p>
        A (vs. R) is used for Attraction (vs. Rejection). The proof of this proposition is in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
The following pseudo-code is a recursive version of the algorithm.
      </p>
      <sec id="sec-3-1">
        <title>Procedure ELL (X0, K)</title>
<p>GL = Ø // GL is a list of concepts
Var i0: element of I; z, z0: elements of F; X, A, R: subsets of I;
begin
z0 = f(X0);
if K ≠ Ø then
begin
Choose an element i0 of K;
z = z0 ∧ f(i0); A = {i ∈ I\X0: z ≤ f(i)};
if A ⊆ K then
begin</p>
<p>X = X0 ∪ A; insert node (X, z) in GL;
ELL (X, K\A);
end;</p>
<p>R = {i ∈ K: z0 ∧ f(i) ≤ f(i0)}; ELL (X0, K\R);
end
end
The procedure ELL (Ø, I) starts with any i0 ∈ I (I is non-empty). Then it determines
the set A. Observing that i0 ∈ A, we have the strict inclusions Ø ⊂ X and I\A ⊂ I.
Hence if A ⊆ K, ELL (X, I\A) will run with a strictly smaller second parameter. Since
i0 ∈ R, we see that the same holds for ELL (X0, I\R).</p>
<p>More generally, ELL (X, K\A) and ELL (X0, K\R) run with a strictly smaller second
parameter than that of ELL (X0, K). Since I is finite, this parameter, which is a subset
of I, will reach the empty set and the procedure will terminate.</p>
<p>This algorithm lists all closed sets without duplicates. Let us show that each closed set
F occurs exactly once.
Starting with GL = Ø, X0 = Ø and K = I, let i0 ∈ I be fixed and consider two cases:
i0 ∈ F and i0 ∉ F. If i0 ∈ F then</p>
<p>Either F is the smallest closed set which contains i0. Then according to the
previous proposition (a), F = X = X0 ∪ A and F will be listed by ELL (X0, K),</p>
<p>Or F is not the smallest closed set which contains i0. In this case X ⊂ F and F
will be listed by ELL (X, K\A).</p>
<p>If i0 ∉ F, then according to the previous proposition (c), F will be listed by
ELL (X0, K\R). Since each insert only concerns the unique smallest closed set
containing X0 and i0, we see that F occurs exactly once.</p>
<p>The only case not treated by the algorithm is whether the empty set is closed or not.
The algorithm yields all the nodes (X, z) of the GL such that X ≠ Ø. Since f(Ø) = 1F
and h(Ø) = {i ∈ I: 1F ≤ f(i)} = {i ∈ I: f(i) = 1F}, Ø is closed iff there is no i ∈ I such
that f(i) = 1F.</p>
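<p>As an illustration only (our code, not the authors' implementation; the three-object context, the choice i0 = min(K), and the top element standing in for 1F are our assumptions), the ELL recursion can be sketched as:</p>
<preformat>
```python
# Sketch of the ELL recursion: descriptions are integer tuples ordered
# componentwise (Section 2); A is the "attraction" set and R the
# "rejection" set of the proposition above.

d = {1: (12, 16, 6), 2: (12, 12, 8), 3: (10, 13, 16)}  # toy context
I = frozenset(d)
TOP = (99, 99, 99)  # stands for 1F

def meet(z, t):
    return tuple(min(a, b) for a, b in zip(z, t))

def leq(z, t):
    return all(a <= b for a, b in zip(z, t))

def f(X):
    z = TOP
    for i in X:
        z = meet(z, d[i])
    return z

GL = []  # the concepts (X, z), built by ELL

def ELL(X0, K):
    if not K:
        return
    z0 = f(X0)
    i0 = min(K)                                          # choose any i0 in K
    z = meet(z0, d[i0])
    A = frozenset(i for i in I - X0 if leq(z, d[i]))     # attraction set
    if A <= K:
        X = X0 | A              # smallest closed set containing X0 and i0
        GL.append((X, z))
        ELL(X, K - A)           # closed sets containing i0
    R = frozenset(i for i in K if leq(meet(z0, d[i]), d[i0]))  # rejection set
    ELL(X0, K - R)              # closed sets not containing i0

ELL(frozenset(), I)  # lists every non-empty closed set exactly once
print(len(GL))       # -> 7 for this context
```
</preformat>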
      </sec>
    </sec>
    <sec id="sec-4">
<title>4 Scalable Distributed G2L Calculus</title>
      <p>
        We proposed a first solution in [
        <xref ref-type="bibr" rid="ref1">1</xref>
] which describes a way to parallelize the calculus over large
contexts, by splitting a context C into sub-contexts according to its rows or columns.
Galois lattices associated with these sub-contexts are built, with ELL, on different
computers, and then the global lattice is determined from these lattices. The algorithms
used for the building of this global lattice are detailed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. This solution was
successfully tested. However, the study has shown that the time complexity becomes
excessive for the larger sets we aim at.
      </p>
<p>We have therefore designed a new solution to parallelize the work. It is based on a
new algorithm, termed SD-ELL. The key idea is a recursive application of ELL.</p>
    </sec>
    <sec id="sec-5">
      <title>4.1 SD-ELL algorithm</title>
<p>SD-ELL is composed of two algorithms. The first one computes the sets of objects Ki
(i = 1,..., n, where n is the number of computed sets Ki), and the second one is
duplicated on different machines to compute all the closed sets corresponding to each
Ki.</p>
<p>The algorithm presented below permits computing the sets Ki. It
proceeds as follows:</p>
<p>Choose an element i0 ∈ K
Compute the sets A and R
If A ⊆ K then (X, z) is a closed itemset and Ki = K\A</p>
<p>Ki+1 = K\R
The previous steps are reused with the parameters Xi and Ki. The process of
decomposing Ki stops when |Ki| is smaller than a fixed threshold or when the total
number of machines, on which the closed itemsets corresponding to each Ki are
computed, is reached.</p>
      <sec id="sec-5-1">
        <title>Procedure KCompute(X0,K0,z0)</title>
<p>LF = Ø; // List of (K, X, z)
t = 1; q = 1; Xt = Ø; Kt = I; zt = 1F;
while ((t ≤ q) and ((q - t + 1) ≤ site number))
begin
X0 = Xt; K0 = Kt; z0 = zt;
Choose an element i0 of K0;
z = z0 ∧ f(i0);
A = {i ∈ I\X0: z ≤ f(i)};
R = {i ∈ K0: z0 ∧ f(i) ≤ f(i0)};
if A ⊆ K0 then
begin</p>
        <p>
X = X0 ∪ A; z = z0 ∧ f(i0); K = K0\A;
LF = LF ∪ (X, z);
q = q + 1; Xq = X; Kq = K; zq = z;
end
q = q + 1; Xq = X0; Kq = K0\R; zq = z0; t = t + 1;
end
The second algorithm constituting SD-ELL is the iterative version of ELL (I-ELL)
duplicated on different machines. This version is detailed in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
Let us use the context C from the example described below to illustrate how these two
algorithms work.
        </p>
<p>O\A    1    2    3    4    5    6
a     12   12   10   13   17    0
b     16   12   13   14   10   14
c      6    8   16   11   10    3
d     10   12   14   10   14    8
e     12   11   11   13   13    4
f     13   13    8   13   10   13
g     12   11   12   12   14   11
h      6    8   10   14   12   10
For this example the threshold is 4 and the number of machines is 4.</p>
<p>The dispatcher executes the first algorithm (KCompute). KCompute starts with the
parameters X0 = Ø and K = I = {1,2,3,4,5,6}. An element i0 is chosen, such as i0 = 1, and
then the sets A, K1, R, and K2 are computed.</p>
<p>z = z0 ∧ f(i0) = (12,16,6,10,12,13,12,6), A = {i ∈ I\X0: z ≤ f(i)} = {1}. Then A ⊆ K:
X1 = X0 ∪ A = {1} and C2 = ({1}; (12,16,6,10,12,13,12,6)) is a concept.</p>
<p>K1 = K\A = {2,3,4,5,6}; |K1| > 4, so K1 must be decomposed.</p>
<p>R = {i ∈ K: z0 ∧ f(i) ≤ f(i0)} = {1}; K2 = K\R = {2,3,4,5,6}; |K2| > 4, so K2 must
also be decomposed.</p>
<p>The previous steps are reused with the parameters (X1 = {1}, K1) and (X0 = Ø, K2).
Decomposition of K1: choose i0 ∈ K1 such that i0 = 2.</p>
<p>z = z0 ∧ f(i0) = (12,12,6,10,11,13,11,6), A = {i ∈ I\X1: z ≤ f(i)} = {2, 4}, and
A ⊆ K1.</p>
<p>The decomposition continues in the same way until each |Ki| is below the threshold or all
four machines are used.
The union of E1, E2, E3, E4, and the concepts calculated in the dispatcher is the set of
all concepts corresponding to the context C.</p>
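<p>The splitting phase can be sketched as follows (our simplification, not the paper's code: a work queue replaces the t/q index bookkeeping, and the toy three-object context, helper names, and the explicit shipping test are our assumptions):</p>
<preformat>
```python
# Sketch of the KCompute splitting step: starting from (X0, K0) = (empty, I),
# each step picks i0, computes the attraction set A and rejection set R,
# records the concept found, and keeps splitting the pending (X, K) pairs
# until each K is small enough to ship to one I-ELL site.
from collections import deque

d = {1: (12, 16, 6), 2: (12, 12, 8), 3: (10, 13, 16)}  # toy context
I = frozenset(d)
TOP = (99, 99, 99)  # stands for 1F

def meet(z, t):
    return tuple(min(a, b) for a, b in zip(z, t))

def leq(z, t):
    return all(a <= b for a, b in zip(z, t))

def f(X):
    z = TOP
    for i in X:
        z = meet(z, d[i])
    return z

def kcompute(threshold, n_sites):
    pending = deque([(frozenset(), I)])
    concepts, tasks = [], []   # concepts found while splitting; (Xi, Ki) tasks
    while pending:
        X0, K = pending.popleft()
        # Ship the subproblem if it is small enough or the sites run out:
        if len(K) <= threshold or len(pending) + len(tasks) + 2 > n_sites:
            tasks.append((X0, K))
            continue
        z0 = f(X0)
        i0 = min(K)
        z = meet(z0, d[i0])
        A = frozenset(i for i in I - X0 if leq(z, d[i]))
        R = frozenset(i for i in K if leq(meet(z0, d[i]), d[i0]))
        if A <= K:
            concepts.append((X0 | A, z))
            pending.append((X0 | A, K - A))
        pending.append((X0, K - R))
    return concepts, tasks

concepts, tasks = kcompute(threshold=2, n_sites=4)
```
</preformat>
Each entry of tasks is then shipped to one site running I-ELL, while the concepts found during splitting stay with the dispatcher.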
      </sec>
    </sec>
    <sec id="sec-6">
      <title>4.2 The Architecture</title>
      <p>
A major problem for the final efficiency of SD-ELL is the storage of the collection of
the closed sets produced, presumably very large and of unknown size. Our tool is a
Scalable Distributed Data Structure (SDDS). An SDDS dynamically distributes its data
over (server) nodes whose number scales with the file growth. The
application accesses the file through client nodes, making the data distribution
transparent. For scalability, the data address calculations do not involve any
centralized directory. The data are also basically stored in distributed RAM, for
faster access than to the traditional disk-based structures [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. All these properties are of
prime importance for our goal.
      </p>
<p>Our first architecture for the SD-ELL calculus, illustrated below, adds to the SDDS
structure of clients and servers a dispatcher, and places our application at each
available client site. The dispatcher executes the first algorithm of SD-ELL. It then
sends the parameters Ki and Xi to the applications. In parallel, on each client the
iterative version is executed with its parameters. We obtain the closed itemsets that,
after the calculus, partition the Galois lattice. The closed sets are sent by the clients
to the servers, where they are saved.</p>
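<p>To make this data flow concrete, here is a sketch of ours (not the paper's code) of the pipeline: dispatcher tasks (Ki, Xi) are mapped to parallel workers standing in for the clients, each running the per-site ELL, and the resulting sets Ei are gathered into one list standing in for the SDDS server buckets. The toy context, the thread-based workers, and all names are our assumptions:</p>
<preformat>
```python
# Sketch of the SD-ELL pipeline: dispatcher tasks -> parallel clients
# running ELL -> closed sets Ei collected into a shared store.
from concurrent.futures import ThreadPoolExecutor

d = {1: (12, 16, 6), 2: (12, 12, 8), 3: (10, 13, 16)}  # toy context
I = frozenset(d)
TOP = (99, 99, 99)  # stands for 1F

def meet(z, t):
    return tuple(min(a, b) for a, b in zip(z, t))

def leq(z, t):
    return all(a <= b for a, b in zip(z, t))

def f(X):
    z = TOP
    for i in X:
        z = meet(z, d[i])
    return z

def ell(X0, K):
    """Per-site ELL: returns Ei, the closed sets extending X0 within K."""
    if not K:
        return []
    z0, i0 = f(X0), min(K)
    z = meet(z0, d[i0])
    A = frozenset(i for i in I - X0 if leq(z, d[i]))
    R = frozenset(i for i in K if leq(meet(z0, d[i]), d[i0]))
    out = []
    if A <= K:
        out.append((X0 | A, z))
        out += ell(X0 | A, K - A)
    return out + ell(X0, K - R)

# Tasks as the dispatcher of Section 4.1 would emit them:
tasks = [(frozenset({1}), frozenset({2, 3})), (frozenset(), frozenset({2, 3}))]
store = []                      # stands in for the SDDS server buckets
with ThreadPoolExecutor() as pool:
    for Ei in pool.map(lambda t: ell(*t), tasks):
        store.extend(Ei)
# store now holds E1 and E2; together with the dispatcher's own concept
# ({1}, f({1})), this is the full set of non-empty closed sets of C.
```
</preformat>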
<p>[Figure: SD-ELL architecture. The dispatcher (KCompute) sends each pair (Ki, Xi) to an application on a client site; each client runs I-ELL(Ki, Xi), producing the closed sets Ei, which are sent to the SDDS servers (server name service, buckets in RAM) for storage.]</p>
<p>The dispatcher is a centralized component. To offset the potential drawbacks of this
approach, we have also designed a more distributed solution. The dispatcher
calculates only the sets K1 and K2 (K1 = K0\A and K2 = K0\R) and sends them to two
applications. These applications become dispatchers in their turn. They send the
parameters Ki and Xi to other applications if |Ki| is greater than a fixed threshold and
there are available machines.</p>
<p>We will first implement and measure the first solution. Later we will also design the
second one and compare the related trade-offs.</p>
<p>Our preliminary experiments have shown that the processing time of the
distributed SD-ELL algorithm outperforms that of the ELL algorithm.
Moreover, increasing the number of processors improves the processing time.</p>
    </sec>
    <sec id="sec-7">
      <title>5 Related work</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref12">12</xref>
], Ndoundam et al. proposed a methodology for implementing a
parallel version of Bordat's algorithm [
        <xref ref-type="bibr" rid="ref14">14</xref>
] through the parallelization of nested loops.
Nevertheless, this parallelization is incomplete, as shown by [
        <xref ref-type="bibr" rid="ref13">13</xref>
], who propose a
finer-grain parallelization.
      </p>
      <p>
In another direction, [
        <xref ref-type="bibr" rid="ref15">15</xref>
] presents a parallel version of Ganter's algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
] for large
contexts, which relies on a partitioning of the search space. Let us note that none of
these previous works provides an effective implementation of the parallel algorithms.
Only [
        <xref ref-type="bibr" rid="ref11">11</xref>
] proposes a validation of its theoretical model through an effective
implementation.
      </p>
    </sec>
    <sec id="sec-8">
      <title>6 Conclusion</title>
      <p>
In this paper, we propose a new algorithm (SD-ELL), a distributed version of
ELL designed to deal with large databases. The architecture used for this distribution
shares the computation of the closed sets of large databases among the different
applications and stores them in the server nodes. The storage of the great number of
closed sets succeeds because the number of server nodes scales dynamically with the
growth of the closed itemsets file. Our ongoing research consists in continuing the
evaluation of the prototype's performance. A possible future research
direction consists in determining the links between the concepts and then comparing
our results with those of [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgements</title>
<p>We would like to express special thanks to Prof. Witold Litwin for his valuable remarks
and his help with using the SDDS.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baklouti</surname>
            . F,
            <given-names>Lévy. G.</given-names>
          </string-name>
          <article-title>Parallel algorithms for general Galois lattices building</article-title>
          .
          <source>Proc. WDAS</source>
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baklouti</surname>
            . F,
            <given-names>Lévy. G.</given-names>
          </string-name>
          <article-title>A fast and general algorithm for Galois lattices building</article-title>
          .
          <source>Journal of Symbolic Data Analysis. Vol 3</source>
          . pp.
          <fpage>19</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Birkhoff</surname>
          </string-name>
          . G. Lattice Theory.
          <source>American Mathematical Society</source>
          , Providence, RI,
          <fpage>3rd</fpage>
          <lpage>edition</lpage>
          .
          <year>1967</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bordat</surname>
          </string-name>
          . J.
<article-title>Calcul pratique du treillis de Galois d'une correspondance</article-title>
          , Mathématique,
          <source>Informatique et Sciences Humaines</source>
          <volume>24</volume>
          (
          <issue>94</issue>
          ),
          <fpage>31</fpage>
          .
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Diday</surname>
          </string-name>
          . E,
          <string-name>
            <surname>Emilion. R. Treillis de Galois</surname>
          </string-name>
maximaux et capacités de Choquet. Cahier de Recherche de l'Académie des Sciences. Paris, t.
          <volume>325</volume>
          ,
          <string-name>
            <surname>Série</surname>
            <given-names>I</given-names>
          </string-name>
          , p.
          <fpage>261</fpage>
          -
          <lpage>266</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Emilion. R.</given-names>
            <surname>Lambert</surname>
          </string-name>
. G, Lévy. G, Algorithmes pour les treillis de Galois, Indo-French Workshop, University Paris IX-Dauphine.
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ganter</surname>
          </string-name>
          . B.
          <article-title>Two basic algorithms in concept analysis</article-title>
          .
          <source>Preprint</source>
          <volume>831</volume>
          ,
          <string-name>
            <surname>Technische</surname>
            <given-names>Hochschule Darmstadt</given-names>
          </string-name>
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Godin</surname>
          </string-name>
          . R.,
          <string-name>
            <surname>Mineau</surname>
          </string-name>
          . G. and al.
          <article-title>Méthodes de classification conceptuelle bases sur les treillis de Galois et application</article-title>
. Revue d'intelligence artificielle, pp.
          <fpage>105</fpage>
          -
          <lpage>137</lpage>
          .
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Ganter</surname>
          </string-name>
          . B,
          <string-name>
            <surname>Wille. R. Formal</surname>
          </string-name>
          <article-title>Concept Analysis</article-title>
          .
          <source>Mathematical Foundations</source>
          , Springer.
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Litwin</surname>
          </string-name>
          . W.,
          <string-name>
            <surname>Neimat</surname>
            ,
            <given-names>M-A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schneider</surname>
            .
            <given-names>D.</given-names>
          </string-name>
          <article-title>A Scalable Distributed Data Structures</article-title>
          .
          <source>ACM Transactions on Database Systems (ACM-TODS)</source>
          ,
          <year>Dec</year>
          .
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Valtchev</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Djoufak</surname>
            <given-names>Kengue</given-names>
          </string-name>
          ,
          <string-name>
            <surname>J</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Djamegni</surname>
          </string-name>
          <string-name>
            <given-names>C</given-names>
            ,
            <surname>T.</surname>
          </string-name>
          ,
<article-title>A parallel algorithm for lattice construction, in the proceedings of the third ICFCA</article-title>
          ,
          <year>February 2005</year>
, Lens, France, pp.
          <fpage>249</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ndoundam</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Njiwoua</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mephu Nguifo</surname>
          </string-name>
, E. Une étude comparative de la parallélisation d'algorithmes de construction de treillis de Galois. Atelier francophone de la plate-forme de l'AFIA.
<article-title>Usage des treillis de Galois pour l'intelligence artificielle</article-title>
          ,
          <source>ESIEA Recherche</source>
          (
          <year>2003</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Njiwoua</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mephu Nguifo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>A parallel algorithm to build concept lattice</article-title>
          .
          <source>In proceedings of the fourth Groningen Int. Information Tech. Conf. for students</source>
          (
          <year>1997</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Bordat</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          <article-title>Calcul pratique du treillis de Galois d une correspondance</article-title>
          .
          <source>Mathématiques et Sciences Humaines</source>
          , Vol.
          <volume>96</volume>
          (
          <year>1986</year>
          )
          <fpage>31</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Fu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Mephu</given-names>
            <surname>Nguifo</surname>
          </string-name>
          ,
          <string-name>
            <surname>E.</surname>
          </string-name>
          <article-title>A parallel algorithm to generate formal concepts for large data</article-title>
          .
          <source>In proceedings of the second international conference on formal concept analysis (ICFCA04)</source>
          , springer LNCS, Sydney
          <string-name>
            <surname>Australia</surname>
          </string-name>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Diday</surname>
          </string-name>
          . E,
          <string-name>
            <surname>Emilion. R. Treillis de Galois</surname>
          </string-name>
maximaux et capacités de Choquet. Cahier de Recherche de l'Académie des Sciences. Paris, t.
          <volume>325</volume>
          ,
          <string-name>
            <surname>Série</surname>
            <given-names>I</given-names>
          </string-name>
          , p.
          <fpage>261</fpage>
          -
          <lpage>266</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Diday</surname>
          </string-name>
          . E, Emilion. R.
          <article-title>Maximal and stochastic Galois lattices</article-title>
          ,
          <source>Discrete Applied Mathematics</source>
          ,
          <volume>127</volume>
          , (
          <year>2003</year>
          ),
          <fpage>271</fpage>
          -
          <lpage>284</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>