<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enumerating Pseudo-Intents in a Partial Order</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandre Bazin</string-name>
          <email>Alexandre.Bazin@lip6.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jean-Gabriel Ganascia</string-name>
          <email>Jean-Gabriel@Ganascia.name</email>
        </contrib>
        <aff>Laboratoire d'Informatique de Paris 6, Université Pierre et Marie Curie, Paris, France</aff>
      </contrib-group>
      <fpage>45</fpage>
      <lpage>56</lpage>
      <abstract>
        <p>The enumeration of all the pseudo-intents of a formal context is usually based on a linear order on attribute sets, the lectic order. We propose an algorithm that uses the lattice structure of the set of intents and pseudo-intents to compute the Duquenne-Guigues basis. We argue that this method allows for efficient optimizations that reduce the required number of logical closures. We then show how it can be easily modified to also compute the Luxenburger basis.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>© paper author(s), 2013. Published in Manuel Ojeda-Aciego, Jan Outrata (Eds.): CLA 2013, pp. 45–56, ISBN 978–2–7466–6566–8, Laboratory L3i, University of La Rochelle, 2013. Copying permitted only for private and academic purposes.</p>
    </sec>
    <sec id="sec-2">
      <title>Notations</title>
      <p>In formal concept analysis, data is represented by a triple C = (O, A, R), called a formal context, where O is a set of objects, A a set of binary attributes and R ⊆ O × A a relation assigning a set of attributes to each object. An object o is said to be described by an attribute set A iff (o, a) ∈ R for every a ∈ A.</p>
      <p>Two derivation operators are commonly employed, defined as follows:</p>
      <p>·′ : 2^A → 2^O, A′ = {o ∈ O | ∀a ∈ A, (o, a) ∈ R} (1)</p>
      <p>·′ : 2^O → 2^A, O′ = {a ∈ A | ∀o ∈ O, (o, a) ∈ R} (2)</p>
      <p>The compositions of these two operators are closure operators and we will use the notation ·′′ for both. A set A is said to be closed if A = A′′. A closed attribute set is called an intent and a closed object set an extent. A pair (E, I) where E ⊆ O and I ⊆ A is called a formal concept of the context if I = E′ and E = I′. A formal concept (A, B) is said to be more general than a concept (C, D) if C ⊂ A or, equivalently, B ⊂ D. The set of all concepts of a given context, ordered in this way, is a complete lattice called the concept lattice of the context.</p>
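      <p>As an illustration, the two derivation operators and the closure operator can be sketched in Python on a small hypothetical context (the objects, attributes and relation below are our own toy example, not taken from the paper):</p>

```python
# Toy formal context: three objects, three attributes, relation R as pairs.
OBJECTS = {1, 2, 3}
ATTRIBUTES = {"a", "b", "c"}
R = {(1, "a"), (1, "b"), (2, "b"), (2, "c"), (3, "b")}

def attr_prime(attrs):
    """A' : the objects described by every attribute of attrs."""
    return {o for o in OBJECTS if all((o, a) in R for a in attrs)}

def obj_prime(objs):
    """O' : the attributes shared by every object of objs."""
    return {a for a in ATTRIBUTES if all((o, a) in R for o in objs)}

def closure(attrs):
    """A'' : the smallest intent containing attrs."""
    return obj_prime(attr_prime(attrs))
```

      <p>Here closure({"a"}) yields the intent {"a", "b"}, while {"b"} is already closed.</p>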
      <p>An implication is a relation between two attribute sets A and B, denoted by A → B. An implication A → B is said to hold in a context if every object described by A is also described by B. We write A ≡ B if A → B and B → A. The implication A → A′′ always holds in the context.</p>
      <p>An attribute set A is a pseudo-intent if it is not an intent and P′′ ⊂ A for all pseudo-intents P ⊂ A. The set B = {A → A′′ | A is a pseudo-intent}, called the canonical basis, is a set of implications of minimum cardinality such that every implication that holds in the context can be inferred from it through the application of the Armstrong rules. That is, it contains the minimum number of implications carrying all the information on the relations between attribute sets in the context.</p>
      <p>
        Computing the canonical basis of a context thus requires the enumeration of
its pseudo-intents. This presents some challenges as the number of pseudo-intents
can be exponential in the size of the context (the number of attributes), as shown
in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Moreover, whether pseudo-intents can be enumerated without intents is
still an open problem and the number of intents can itself be exponential in the
number of pseudo-intents.
      </p>
      <sec id="sec-2-1">
        <title>Next Closure</title>
        <p>
          The best-known algorithm for computing the canonical basis is Ganter's Next Closure [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. For an arbitrary linear order &lt; on A, a linear order on attribute sets, called the lectic order ≤lec, is defined as follows:
        </p>
        <p>A ≤lec B ⇔ A = B or min(AΔB) ∈ B</p>
        <p>where Δ denotes the symmetric difference. The implicational closure of an attribute set A with respect to a set of implications L, usually noted L(A), is the smallest superset of A such that, for every X ⊆ L(A), we have X → Y ∈ L ⇒ Y ⊆ L(A). The implicational closure is a closure operator. The implicational pseudo-closure of an attribute set A, noted L−(A), is the smallest superset of A such that, for every X ⊂ L−(A), we have X → Y ∈ L ⇒ Y ⊆ L−(A).</p>
        <p>It can be shown that both intents and pseudo-intents are closed under B−. Next Closure, in its original form, computes the closed sets of a given closure operator. Applied to the computation of pseudo-intents, it goes through the implicationally pseudo-closed subsets of A in lectic order and checks whether each of them is an intent or a pseudo-intent. The lectic order being linear, every set is computed once and only once. For an attribute set A and an attribute i, the operator ⊕ is defined as follows:</p>
        <p>A ⊕ i = B−({a ∈ A | a ≤ i} ∪ {i})</p>
        <p>
          The closed set that immediately follows A in the lectic order is A ⊕ i with i maximal such that i = min((A ⊕ i) \ A). For each set closed under B−, Next Closure tests whether it is an intent or a pseudo-intent, then constructs the next set with the operator ⊕ until it reaches A. Numerous optimizations can be added to reduce the number of implicational closures. For example, as proposed in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], if Next(A) = A ⊕ i = B and min(B′′ \ A) &lt; max(A), then we can continue as if A ⊕ i had been rejected by Next Closure. It is the only optimization we will consider here, as it is specific to Next Closure and has no bearing on our algorithm.
        </p>
      </sec>
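      <p>The lectic order, the pseudo-closure B− and the Next Closure loop described above can be written compactly in Python. This is our own illustrative reconstruction, with attributes encoded as integers; it is not the paper's implementation:</p>

```python
def lectic_leq(A, B):
    """A is lectically smaller than or equal to B."""
    return A == B or min(A ^ B) in B

def canonical_basis(intent_of):
    """Duquenne-Guigues basis via Next Closure.  `intent_of` maps each
    object of a context to its set of integer attributes."""
    universe = set().union(*intent_of.values())
    order = sorted(universe)
    basis = []

    def closure(A):
        # A'' computed from the context.
        extent = [o for o in intent_of if A.issubset(intent_of[o])]
        if not extent:
            return set(universe)
        return set.intersection(*(set(intent_of[o]) for o in extent))

    def pseudo_closure(A):
        # B-(A): fire X -> Y only while X is a *strict* subset of the result.
        result, changed = set(A), True
        while changed:
            changed = False
            for X, Y in basis:
                if X != result and X.issubset(result) and not Y.issubset(result):
                    result |= Y
                    changed = True
        return result

    def next_closed(A):
        # Lectically next set closed under B-.
        A = set(A)
        for i in reversed(order):
            if i in A:
                A.discard(i)
            else:
                B = pseudo_closure(A | {i})
                if all(b in A or b >= i for b in B):
                    return B
        return None

    A = set()  # the empty set is always closed under B-
    while A != universe:
        C = closure(A)
        if C != A:                      # A is a pseudo-intent
            basis.append((set(A), C))
        A = next_closed(A)
    return basis
```

      <p>On the toy context {1: {1, 2}, 2: {2, 3}, 3: {2}}, the only pseudo-intent is ∅ and the basis consists of the single implication ∅ → {2}.</p>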
    </sec>
    <sec id="sec-3">
      <title>Algorithm</title>
      <sec id="sec-3-1">
        <title>Principles</title>
        <p>
          We consider the lattice Φ = ({A ⊆ A | A = B−(A)}, ⊆) of attribute sets closed under B−, ordered by the inclusion relation. Enumerating pseudo-intents amounts to enumerating all the elements of Φ and testing whether they are intents or pseudo-intents. As with other enumeration problems, we want to make sure every element is found once and only once. The other great FCA problem, computing the formal concepts of a context, can be viewed as the enumeration of the elements of ({B(A) | A ⊆ A}, ⊆). It has been extensively studied (see [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] for a comparison of different algorithms) and various approaches have been proposed, both for generating elements from previous ones and for checking whether an attribute set has already been found (or even for preventing the generation of redundant sets). Surprisingly, it seems only Next Closure is commonly used for the problem of pseudo-intents, non-batch algorithms excluded. Indeed, it is very efficient in that the lectic order on attribute sets allows for the generation of the next set without storing more than the current set, and the total order ensures we do not generate sets more than once. However, we feel that the fact that sets are not always generated from one of their subsets is a weakness for some applications, such as data mining. We propose to modify Next Closure with techniques from algorithms for formal concepts that make use of the lattice structure.
        </p>
        <p>The lectic order has the interesting property that a set is lectically greater than all of its subsets. If B ∈ Φ is to be generated by a single set, it should be one that is easily identifiable, such as its lectically greatest subset A ∈ Φ. For any two A and B in Φ, we shall use A ◁ B to denote the fact that A is the lectically greatest subset of B closed under B− or, equivalently, that B is generated from A. We say that A is the predecessor of B and B is a successor of A.</p>
        <p>Proposition 1. The relation ◁ defines a spanning tree of the neighbouring graph of Φ.</p>
        <p>Proof. Any non-empty set B has at least one strict subset in Φ. We call A its lectically greatest subset in Φ. The lectic order respects the inclusion relation, so there is no C ∈ Φ such that A ⊂ C ⊂ B, and A is a lower cover of B. Since every non-empty B ∈ Φ has a single lower cover A such that A ◁ B, the relation ◁ defines a spanning tree of the neighbouring graph of Φ.</p>
        <p>Proposition 2. For any two A, B ∈ Φ, the attribute set B is an upper cover of A if and only if B = B−(A ∪ {i}) for every i ∈ B \ A.</p>
        <p>Proof. If B is an upper cover of A, then there is no C such that A ⊂ B−(C) ⊂ B. We have B−(A ∪ {i}) ⊆ B when i ∈ B \ A, so B = B−(A ∪ {i}) for all i ∈ B \ A.</p>
        <p>Conversely, if B = B−(A ∪ {i}) for every i ∈ B \ A, then, given that A ∪ {i} is the smallest subset of B that contains both A and i, there is no element of Φ that is a superset of A and a strict subset of B.</p>
        <p>Attribute sets are generated according to the tree, starting from ∅. The
recursive nature of the definition of a pseudo-intent prevents us from computing
a set before the computation of all its subsets. The lectic order solves this
problem efficiently but our tree contains only the knowledge of the lectically greatest
subset. In order to make sure that every subset has been found first, we
propose to consider attribute sets in order of increasing cardinality. This way, when
an intent or pseudo-intent of cardinality n is considered, we are sure that all
pseudo-intents of cardinality n − 1 have been found.</p>
        <p>Proposition 3. If i &lt; min(A) and A ⊂ A ⊕ i, then A ⊕ i has a subset in Φ that
is lectically greater than A.</p>
        <p>Proof. If i &lt; min(A), then B−({a ∈ A | a ≤ i} ∪ {i}) = B−({i}). If A ⊂ A ⊕ i, then {i} is a pseudo-intent that is a subset of A ⊕ i and is lectically greater than A.</p>
        <p>Algorithm 1: Successors(A) (pseudocode not reproduced)</p>
        <p>Proposition 4. For any two attribute sets A and B such that A ⊂ B, A ◁ B ⇔ ∀i ∈ B \ A, A ⊕ i = B.</p>
        <p>Proof. If ∀i ∈ B \ A, A ⊕ i = B, then B has no subset in Φ lectically greater than A. As such, ∀i ∈ B \ A, A ⊕ i = B ⇒ A ◁ B.</p>
        <p>If A ◁ B and ∃i ∈ B \ A such that A ⊕ i ⊂ B, then B has a subset in Φ lectically greater than A, which contradicts the hypothesis. As such, A ◁ B ⇒ ∀i ∈ B \ A, A ⊕ i = B.</p>
        <p>So, in order to generate all the successors of A, it suffices to compute A ⊕ i
for every i greater than min(A). If a resulting attribute set is an upper cover of
A it is a successor.</p>
        <p>Testing whether a new attribute set B = A ⊕ i is a successor of A is easy
as we know A ⊕ j for every j ∈ B \ A. Algorithm 1 uses this to compute the
successors of an attribute set.</p>
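        <p>Since the pseudocode of Algorithm 1 is not reproduced above, here is our own Python reconstruction of the successor generation from Propositions 3 and 4 (the names and integer encoding of attributes are ours):</p>

```python
def pseudo_closure(attrs, basis):
    """B-(attrs) with respect to the current implication list."""
    result, changed = set(attrs), True
    while changed:
        changed = False
        for X, Y in basis:
            if X != result and X.issubset(result) and not Y.issubset(result):
                result |= Y
                changed = True
    return result

def oplus(A, i, basis):
    """A (+) i = B-({a in A | a not greater than i} union {i})."""
    return pseudo_closure({a for a in A if i >= a} | {i}, basis)

def successors(A, attributes, basis):
    """Candidates A (+) i for i greater than min(A); keep B iff
    A (+) j == B for every j in B minus A (Proposition 4)."""
    if A:
        cands = [oplus(A, i, basis) for i in sorted(attributes - A) if i > min(A)]
    else:
        cands = [oplus(A, i, basis) for i in sorted(attributes)]
    succ = []
    for B in cands:
        if B not in succ and all(oplus(A, j, basis) == B for j in B - A):
            succ.append(B)
    return succ
```

        <p>With the implication {2} → {2, 3} in the basis, the set {1} has the single successor {1, 3}: the candidate {1, 2, 3} = {1} ⊕ 2 is rejected because {1} ⊕ 3 = {1, 3} differs from it.</p>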
        <p>If an attribute set B is a pseudo-intent, it is a ∧-irreducible element of Φ. That is, it has a single upper cover. If B = B−(A), the set B′′ is the lectically next set after B if max(A) ≤ min(A′′ \ A). That way, if an attribute set is a pseudo-intent, we do not have to explicitly compute its successors.</p>
        <p>Algorithm 2 uses Algorithm 1 to go through the intents and pseudo-intents
of the context and compute its canonical basis. It starts with ∅ and computes
the closure of all the attribute sets of the same cardinality simultaneously before
generating their successors. However, when an intent A is considered and its successors generated, only the pseudo-intents of cardinality lesser than |A| + 1 have been found, so the A ⊕ i computed at this point might be different from the final A ⊕ i. For example, if |A ⊕ i| − |A| ≥ 2, an implication B → C with |B| = |A| + 1 and B ⊂ A ⊕ i can change the result. For this reason, we must split Algorithm 1 into two parts. First, the sets A ⊕ i are computed for all i once every intent of cardinality |A| has been found. Then, when A ⊕ i is considered as a candidate, we must check whether it is still logically closed and, if it is, test whether it is a successor with Proposition 4.</p>
        <p>
Proposition 5. Algorithm 2 terminates and produces the canonical basis of the
context.</p>
        <p>Proof. An attribute set can only be used to generate attribute sets of greater
cardinality. The set of attributes is finite, so the algorithm terminates when it
reaches A.</p>
        <p>Every element of Φ, except ∅, has a lower neighbour that is its predecessor in the spanning tree. If we generate the attribute sets along the tree, the lattice Φ being complete, every one of its elements is considered at least once. If an attribute set A is not equal to B−(A), it is not a pseudo-intent, so all pseudo-intents are found.</p>
        <p>
          During Algorithm 2, Algorithm 1 is applied to every intent to generate
attribute sets. Their closure is then computed. As such, Algorithm 2 is in
O(|Φ| × (X + Y )) where X is the complexity of computing the closure of a
set and Y = |A| × L is the complexity of generating successors that depends on
the complexity L of the saturation algorithm used. A study of algorithms for
computing the implicational closure and their use in our problem can be found
in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Note that, since every potential pseudo-intent P is constructed from an intent I with I ⊂ P, computing P′′ can be done on the subcontext containing the objects of I′.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Improvements</title>
        <p>A further reduction of the number of implicational closures required to compute the basis can be achieved by making use of the property that every attribute set is constructed by adding an attribute to one of its subsets.</p>
        <p>Proposition 6. If B = A ⊕ i is a successor of A with i = min(B \ A), then
B ⊕ j = A ⊕ j for every attribute j &lt; i.</p>
        <p>Proof. If j &lt; i, then B−({b ∈ B | b &lt; j} ∪ {j}) = B−({a ∈ A | a &lt; j} ∪ {j})
because i = min(B \ A). Thus, we have B ⊕ j = A ⊕ j.</p>
        <p>This lets us use the knowledge acquired on a set A to reduce the number of necessary logical closures on its supersets. Indeed, for any B = A ⊕ i with i = min(B \ A), the set B ⊕ j is already known for every attribute j less than i. This means that, in order to compute the successors of B, we only need to apply the ⊕ operator with attributes greater than i.</p>
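        <p>Proposition 6 can be checked mechanically. Below, with the single implication {2} → {2, 3}, the candidate B = A ⊕ 4 built from A = {3} inherits every value A ⊕ j for j &lt; 4 (the helper functions repeat the earlier sketch and are our own):</p>

```python
def pseudo_closure(attrs, basis):
    # B-(attrs): fire X -> Y only while X is a strict subset of the result.
    result, changed = set(attrs), True
    while changed:
        changed = False
        for X, Y in basis:
            if X != result and X.issubset(result) and not Y.issubset(result):
                result |= Y
                changed = True
    return result

def oplus(A, i, basis):
    return pseudo_closure({a for a in A if i >= a} | {i}, basis)

basis = [({2}, {2, 3})]
A = {3}
B = oplus(A, 4, basis)       # B = {3, 4}, and min(B - A) = 4
# Proposition 6: B (+) j coincides with A (+) j for every j below min(B - A).
inherited = {j: oplus(A, j, basis) for j in (1, 2, 3)}
```

        <p>Every B ⊕ j for j in {1, 2, 3} then equals the corresponding inherited value, so those closures need not be recomputed for B.</p>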
        <p>We can also use the fact that there are pseudo-intents P such that P′′ = A, which means that no object of the context is described by P. Indeed, for an intent A such that A ◁ A ⊕ i, there are two possibilities for i. If i ∈ ⋃o∈A′ {o}′, meaning there is at least one object described by A ∪ {i}, the set A ⊕ i is either an intent or a pseudo-intent. If i ∈ A \ ⋃o∈A′ {o}′, meaning no object is described by A ∪ {i}, the closure of A ⊕ i is A and, for any superset B of A, the set B ⊕ i is either equal to A ⊕ i or to A. This means that computing B ⊕ i is unnecessary for every successor B of A in the spanning tree.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Computing a Base for Partial Implications</title>
      <sec id="sec-4-1">
        <title>Luxenburger Basis</title>
        <p>A partial implication between two attribute sets A and B, also called an association rule, is a relation of the form A →s,c B, where s is called the support and c the confidence. It means that s% of the objects of the context are described by A and that c% of those are also described by B. Implications, as defined in Section 2, are partial implications with a confidence of 100%. An attribute set A having the same support as A′′, the confidence and support of a partial implication A →s,c B can be derived from A′′ →s,c B′′. As such, to obtain a basis for partial implications, one only needs to know the intents of the formal context.</p>
        <p>
          It has been shown in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] that a minimal basis for partial implications, called the Luxenburger basis, is obtained by considering a set {A →s,c B | A = A′′ and B = B′′} that forms a spanning tree of the neighbouring graph of the intent lattice such that each intent is the conclusion of a single partial implication.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Modification of the Algorithm</title>
        <p>In Algorithm 2, every attribute set is generated from a single intent, with the exception of some intents, namely those that lectically follow a pseudo-intent. We would like those intents to be constructed from their lectically greatest subset that is an intent instead of just their lectically greatest subset. Let us suppose that we have an intent A and a pseudo-intent B such that A ◁ B and B ◁ B′′. The lectically greatest intent that is a subset of B′′ is either A or a superset of A, so it can be constructed by adding attributes of B′′ \ A to A.</p>
        <p>Algorithm 3 computes C for a given A, B and B′′.</p>
        <p>Algorithm 3 Lectically Greatest Sub-Intent
1: C = A
2: for every attribute i in B′′ \ A in increasing order do
3:   if B(C ∪ {i}) ≠ B′′ then
4:     C = C ∪ {i}
5:   end if
6: end for
7: return C</p>
        <p>Proposition 7. Algorithm 3 terminates and computes the lectically greatest intent that is a strict subset of B′′.</p>
        <p>Proof. There is a finite number of attributes and the loop goes through them in a total order, so the algorithm terminates. The attribute set it returns, C, is either A or closed under B(·), so it is an intent. It is the implicational closure of a subset of B′′ and it is not equal to B′′, so we have C ⊂ B′′. Let us suppose that there is an intent D ⊂ B′′ lectically greater than C, with i = min(CΔD). The attribute i is such that B(X ∪ {i}) ⊂ B′′ for any A ⊆ X ⊆ D, so the algorithm must have added it to C, and we have C = D.</p>
        <p>Thus, the algorithm returns C, the lectically greatest intent that is a strict subset of B′′.</p>
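        <p>Algorithm 3 translates directly into Python. Here the implicational closure under the full basis B is passed in as a function to keep the sketch self-contained (this transcription and its names are our own):</p>

```python
def impl_closure(attrs, implications):
    """Plain implicational closure L(attrs) (non-strict firing)."""
    result, changed = set(attrs), True
    while changed:
        changed = False
        for X, Y in implications:
            if X.issubset(result) and not Y.issubset(result):
                result |= Y
                changed = True
    return result

def lectically_greatest_sub_intent(A, B_closed, B_closure):
    """Algorithm 3: the lectically greatest intent strictly below B_closed,
    grown from the predecessor intent A."""
    C = set(A)
    for i in sorted(B_closed - A):        # increasing attribute order
        if B_closure(C | {i}) != B_closed:
            C = C | {i}
    return C
```

        <p>With the single implication {2} → {2, 3} and B′′ = {2, 3}, starting from A = ∅ the algorithm skips 2 (its closure already reaches {2, 3}) and keeps 3, returning the intent {3}.</p>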
        <p>If, in Algorithm 2, we maintain the spanning tree used to generate attribute sets, it is easy to apply Algorithm 3 to every intent that lectically follows a pseudo-intent. If we change the predecessor of those intents to the attribute set computed by Algorithm 3, we obtain a spanning tree of Φ in which the predecessor of every intent is an intent. As A also has a unique predecessor, this gives us the Luxenburger basis of the context along with the Duquenne-Guigues basis.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Example</title>
      <p>We apply Algorithm 2 on a formal context over the attributes a &lt; b &lt; c &lt; d &lt; e (the cross-table of the context is not reproduced here). The algorithm starts with ∅. It is closed, so we generate its successors.</p>
      <p>The set of Candidates is then {a, b, c, d, e}. Among them, only a is a pseudo-intent, with a′′ = ab, so we add a → ab to the basis. Moreover, ab lectically follows a, so we add ab to Candidates. Its lectically greatest subset that is an intent is b. We then generate the successors of b, c, d and e. There is no attribute greater than min(e) = e, so e has no successors.</p>
      <p>b ⊕ c = bc, b ⊕ d = bd, b ⊕ e = be; c ⊕ d = cd, c ⊕ e = ce; d ⊕ e = de</p>
      <p>The set of Candidates is then {ab, bc, bd, be, cd, ce, de}. Among them, bc, be
and cd are pseudo-intents so we add bc → bcd, be → bde and cd → bcd to B. The
set bcd lectically follows bc so we add bcd to Candidates. Its lectically greatest
subset that is an intent is bd. We then generate the successors of ab, bd, ce and
de. The set de has no successors for the same reasons as the set e in the previous
step and we already know that c ⊕ d = cd so ce ⊕ d is known and ce has no
successors.</p>
      <p>ab ⊕ c = abcd, ab ⊕ d = abd, ab ⊕ e = abde; bd ⊕ e = bde</p>
      <p>The set of Candidates is then {abd, bcd, bde}. Among them, abd is a pseudo-intent, so we add abd → abcde to B. The set abcde lectically follows abd, so we add abcde to Candidates. Its lectically greatest subset that is an intent is ab. We then generate the successors of bcd and bde. We already know bd ⊕ c, so we also know bde ⊕ c. As such, computing bde ⊕ c is not needed and no other attribute is available, so bde has no successors.</p>
      <p>bcd ⊕ e = bcde</p>
      <p>The set of Candidates is then {bcde, abcde}. Only bcde has a cardinality of 4
and it is a pseudo-intent so we add bcde → abcde to B. There are no new intents
so we continue with the elements of Candidates of cardinality 5. The set abcde
is an intent and has no successors since it is equal to A.</p>
      <p>The algorithm stops having produced the following implications :
– a → ab
– bc → bcd
– be → bde
– cd → bcd
– abd → abcde
– bcde → abcde</p>
      <p>This is the Duquenne-Guigues basis of the context. In addition, a spanning tree of the intent lattice has been constructed (the figure is not reproduced here).</p>
      <p>It corresponds to the following set of partial implications :
– ∅ →1,0.6 b
– ∅ →1,0.4 c
– ∅ →1,0.6 d
– ∅ →1,0.6 e
– b →0.6,0.33 ab
– b →0.6,0.66 bd
– c →0.4,0.5 ce
– d →0.6,0.66 de
– ab →0.2,0 abcde
– bd →0.4,0.5 bcd
– bd →0.4,0.6 bde
To compute it, Algorithm 3 has been used a total of 3 times.</p>
      <p>To compute the Duquenne-Guigues basis of this context, our algorithm has performed 16 logical closures. Since the version of Next Closure using the optimization presented in Section 2.1 would have performed only 15, Next Closure is more efficient in this case. However, on the context presented in Figure 3, our algorithm needs only 16 logical closures while Next Closure performs 19 of them. This is due to the fact that attributes are not considered once they have been used to generate an implication P → A, and this context has a number of pairs of attributes that never appear together. This leads us to believe that our algorithm may be more efficient on those kinds of contexts.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>We proposed an algorithm that computes the Duquenne-Guigues basis of a formal context. It makes use of the lattice structure of the set of both intents and pseudo-intents, in a fashion similar to that of algorithms for computing the concept lattice. The construction of attribute sets is inspired by Next Closure, the most common batch algorithm for computing pseudo-intents, in that it uses the lectic order to ensure the uniqueness of generated sets while avoiding going through what has already been computed. We showed that the algorithm can easily be modified, without much loss complexity-wise, to produce both the Duquenne-Guigues and the Luxenburger bases. Those two bases together are sought after in the domain of association rule mining, where it is crucial to obtain a minimal number of rules. They are usually derived from the set of frequent concepts, which has to be computed first, and, to the best of our knowledge, no algorithm is able to compute them both directly and at the same time without a significant increase in complexity.</p>
      <p>Some improvements can be made to the generation of the attribute sets. Most notably, the inclusion test in Algorithm 1 could be done during the computation of the implicational closure, and multiple closures could be performed simultaneously. The fact that attribute sets are effectively generated following a partial order instead of a total order could also permit some degree of parallelization. Such optimizations will be the subject of further research on our part.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Konstantin</given-names>
            <surname>Bazhanov</surname>
          </string-name>
          and
          <string-name>
            <given-names>Sergei A.</given-names>
            <surname>Obiedkov</surname>
          </string-name>
          <article-title>Comparing performance of algorithms for generating the Duquenne-Guigues basis</article-title>
          .
          <source>In CLA</source>
          , pages
          <fpage>43</fpage>
          -
          <lpage>57</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Felix</given-names>
            <surname>Distel</surname>
          </string-name>
          and
          <string-name>
            <given-names>Barış</given-names>
            <surname>Sertkaya</surname>
          </string-name>
          .
          <article-title>On the complexity of enumerating pseudo-intents</article-title>
          .
          <source>Discrete Applied Mathematics</source>
          ,
          <volume>159</volume>
          (
          <issue>6</issue>
          ):
          <fpage>450</fpage>
          -
          <lpage>466</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>V.</given-names>
            <surname>Duquenne</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.-L.</given-names>
            <surname>Guigues</surname>
          </string-name>
          .
          <article-title>Famille minimale d'implications informatives résultant d'un tableau de données binaires</article-title>
          .
          <source>Mathématiques et Sciences Humaines</source>
          ,
          <volume>24</volume>
          (
          <issue>95</issue>
          ):
          <fpage>5</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Bernhard</given-names>
            <surname>Ganter</surname>
          </string-name>
          .
          <article-title>Two basic algorithms in concept analysis</article-title>
          .
          <source>In Proceedings of the 8th international conference on Formal Concept Analysis, ICFCA'10</source>
          , pages
          <fpage>312</fpage>
          -
          <lpage>340</lpage>
          , Berlin, Heidelberg,
          <year>2010</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Sergei O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          .
          <article-title>On the intractability of computing the Duquenne-Guigues base</article-title>
          .
          <source>Journal of Universal Computer Science</source>
          ,
          <volume>10</volume>
          (
          <issue>8</issue>
          ):
          <fpage>927</fpage>
          -
          <lpage>933</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Sergei O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          and
          <string-name>
            <given-names>Sergei</given-names>
            <surname>Obiedkov</surname>
          </string-name>
          .
          <article-title>Comparing performance of algorithms for generating concept lattices</article-title>
          .
          <source>Journal of Experimental and Theoretical Artificial Intelligence</source>
          ,
          <volume>14</volume>
          :
          <fpage>189</fpage>
          -
          <lpage>216</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Michael</given-names>
            <surname>Luxenburger</surname>
          </string-name>
          .
          <article-title>Implications partielles dans un contexte</article-title>
          .
          <source>Mathématiques et Sciences Humaines</source>
          ,
          <volume>113</volume>
          :
          <fpage>35</fpage>
          -
          <lpage>55</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>S.</given-names>
            <surname>Obiedkov</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Duquenne</surname>
          </string-name>
          .
          <article-title>Attribute-incremental construction of the canonical implication basis</article-title>
          .
          <source>Annals of Mathematics and Artificial Intelligence</source>
          ,
          <volume>49</volume>
          (
          <issue>1-4</issue>
          ):
          <fpage>77</fpage>
          -
          <lpage>99</lpage>
          ,
          <year>April 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Petko</given-names>
            <surname>Valtchev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Vincent</given-names>
            <surname>Duquenne</surname>
          </string-name>
          .
          <article-title>On the merge of factor canonical bases</article-title>
          .
          <source>In Proceedings of the 6th international conference on Formal concept analysis</source>
          ,
          <source>ICFCA'08</source>
          , pages
          <fpage>182</fpage>
          -
          <lpage>198</lpage>
          , Berlin, Heidelberg,
          <year>2008</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>