      Extending Attribute Exploration by Means of
                 Boolean Derivatives

          J. Antonio Alonso-Jiménez1 , Gonzalo A. Aranda-Corral1 , Joaquı́n
           Borrego-Dı́az1 , and M. Magdalena Fernández-Lebrón2 , M. José
                                  Hidalgo-Doblado1
           1
            Departamento de Ciencias de la Computación e Inteligencia Artificial
                           2
                             Departamento de Matemática Aplicada I
      E.T.S. Ingenierı́a Informática, Universidad de Sevilla, Avda. Reina Mercedes s.n.
                                       41012-Sevilla, Spain

          Abstract. We present a translation of problems of Formal Concept
          Analysis into ideal membership problems in F2 [x] through the Boolean
          derivatives. The Boolean derivatives are introduced as operators on
          propositional formulas which provide a complete calculus. They are
          useful for refining stem bases as well as for extending attribute
          exploration.


  1     Introduction
  Attribute exploration (cf. [3]) is a family of interactive procedures for Knowledge
  Acquisition (KA) in Formal Concept Analysis (FCA), whose goal is to build a
knowledge base of the attributes we are working with. The procedures used in
FCA have nice computer implementations, and there even exist generalizations
for the management of background information. Sometimes attribute exploration
is hard or tedious to apply, so it may be advisable to use automated tools. Many
Computer Algebra Systems (CAS) provide tools for working with discrete data,
for example, Gröbner bases. Since it is possible to translate entailment problems
into ideal membership problems over finite fields, Gröbner bases are a powerful
tool for reasoning in propositional logic [8, 9, 2].
      Our aim is to extend the framework of attribute exploration through the
introduction of Boolean derivatives and the assistance of a CAS. The CAS that
we will use is CoCoA (http://cocoa.dima.unige.it/), which is very well suited
for our purposes because of its easy management of Gröbner bases and related tools.
  The paper is organized as follows. The next section reviews the relationship
  between propositional logic and the ring F2 [x], as well as the basics of FCA. In
  the third section the Boolean derivatives are introduced, as well as a complete
polynomial calculus based on them. An algebraic characterization of sensitivity
for implications in FCA is given in the fourth section. In the fifth and sixth
sections new versions of attribute exploration are introduced, and in section 7 an application
  to graph theory is given. We conclude with some remarks about future work.

  2     Background
We assume that the reader is familiar with propositional logic and polynomial
algebra in positive characteristic. We set up a propositional language P V =

c Radim Belohlavek, Sergei O. Kuznetsov (Eds.): CLA 2008, pp. 121–132,
  ISBN 978–80–244–2111–7, Palacký University, Olomouc, 2008.

{p1 , . . . , pn }, P F orm will denote the set of propositional formulas, and var(F )
denotes the set of variables of the propositional formula F .
    The ring in which we are working is F2 [x] (where x = x1 , . . . , xn ). A key ideal
is I2 := (x1 + x1^2 , . . . , xn + xn^2 ). To fix the notation, we fix an identification
pi 7→ xi (or p 7→ xp ) between P V and the set of indeterminates.
    Given α = (α1 , . . . αn ) ∈ Nn , let us define |α| := max{α1 , . . . , αn }, and
sg(α) := (δ1 , . . . , δn ), where δi is 0 if αi = 0 and 1 otherwise. The degree of a(x) ∈
F2 [x], is deg∞ (a(x)) :=max{|α| : xα is a monomial of a}, and degi (a(x)) is the
degree w.r.t. xi . If deg∞ (a(x)) ≤ 1, a(x) is called a polynomial formula.
    Three maps represent the relationship between propositional logic and F2 [x]:

 – Φ : F2 [x] → F2 [x] is defined by Φ( Σ_{α∈I} x^α ) := Σ_{α∈I} x^{sg(α)} .
 – The map P : P F orm → F2 [x] is defined by the following equations
    • P (⊥) = 0, P (pi ) = xi , P (¬F ) = 1 + P (F )
    • P (F1 ∧F2 ) = P (F1 )·P (F2 ) and P (F1 ∨F2 ) = P (F1 )+P (F2 )+P (F1 )P (F2 )
    • P (F1 → F2 ) = 1 + P (F1 ) + P (F1 )P (F2 ), and
    • P (F1 ↔ F2 ) = 1 + P (F1 ) + P (F2 )
 – Θ : F2 [x] → P F orm is defined by
    • Θ(0) = ⊥, Θ(1) = >, Θ(xi ) = pi ,
    • Θ(a · b) = Θ(a) ∧ Θ(b), and Θ(a + b) = ¬(Θ(a) ↔ Θ(b)).
We have that Θ(P (F )) ≡ F and P (Θ(a)) = a. Since we shall frequently be
applying Φ ◦ P , we define the polynomial projection as π := Φ ◦ P .
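Under the identification pi 7→ xi , the maps P and Φ are straightforward to implement. The following is a minimal Python sketch with sympy (the helper names P_or, phi, etc. are ours, not from the paper):

```python
from sympy import symbols, expand, Poly

x1, x2 = symbols('x1 x2')

# The map P on connectives, applied to already-translated subformulas
def P_neg(a):    return 1 + a              # P(~F)      = 1 + P(F)
def P_and(a, b): return a * b              # P(F1 & F2) = P(F1)P(F2)
def P_or(a, b):  return a + b + a * b
def P_imp(a, b): return 1 + a + a * b
def P_iff(a, b): return 1 + a + b

def phi(expr, gens):
    """Phi: reduce coefficients mod 2 and every exponent to its sign,
    yielding the canonical polynomial formula (deg_inf <= 1)."""
    acc = {}
    for monom, coeff in Poly(expand(expr), *gens).terms():
        sg = tuple(min(e, 1) for e in monom)
        acc[sg] = (acc.get(sg, 0) + coeff) % 2
    out = 0
    for monom, c in acc.items():
        if c:
            term = 1
            for g, e in zip(gens, monom):
                if e:
                    term *= g
            out += term
    return expand(out)

# pi(p1 \/ p1) reduces to x1, and pi(p1 -> p1) is the constant 1
assert phi(P_or(x1, x1), (x1, x2)) == x1
assert phi(P_imp(x1, x1), (x1, x2)) == 1
```

The composition phi(P(...)) plays the role of the polynomial projection π.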
    Regarding valuations and polynomials, the key fact is that, if v : P V → {0, 1}
is a valuation with v(pi ) = δi , then for every F ∈ P F orm,

                                 v(F ) = P (F )(δ1 , . . . δn )

    The behaviour of the ideals of F2 [x] is well known: if A ⊆ (F2 )^n , then
V (I(A)) = A, and for every I ∈ Ideals(F2 [x]), I(V (I)) = I + I2 . Therefore
F ≡ F ′ if and only if P (F ) = P (F ′ ) (mod I2 ), which is also equivalent to
Φ ◦ P (F ) = Φ ◦ P (F ′ ). The following theorem states the main relationship be-
tween propositional logic and F2 [x]:
Theorem 1. The following conditions are equivalent:
(1) {F1 , . . . , Fm } |= G.
(2) 1 + P (G) ∈ (1 + P (F1 ), . . . , 1 + P (Fm )) + I2 .
(3) NF(1 + P (G), GB [(1 + P (F1 ), . . . , 1 + P (Fm )) + I2 ]) = 0.
  (where GB denotes Gröbner basis and NF denotes normal form)
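Condition (3) can be checked mechanically with any CAS that computes Gröbner bases over F2. The paper uses CoCoA; the same test can be sketched in Python with sympy (an illustration of the theorem, not the authors' tool chain):

```python
from sympy import symbols, groebner, expand

x1, x2, x3 = symbols('x1 x2 x3')

# The field ideal I2 = (x_i + x_i^2)
I2 = [x + x**2 for x in (x1, x2, x3)]

P_imp = lambda a, b: expand(1 + a + a*b)   # P(F -> G)

# Does {p1 -> p2, p2 -> p3} entail p1 -> p3 ?
hyps = [expand(1 + P_imp(x1, x2)), expand(1 + P_imp(x2, x3))]
goal = expand(1 + P_imp(x1, x3))

G = groebner(hyps + I2, x1, x2, x3, modulus=2, order='lex')
# Normal form 0 <=> entailment holds (Theorem 1)
assert G.reduce(goal)[1] == 0
```

Since hypothetical syllogism is valid, the normal form of the goal is 0.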
    In the rest of this section we succinctly present some elements of FCA that we
use, although we assume that the reader knows the basic principles of this theory (the
fundamental reference is [3]). We represent a formal context as M = (O, A, I),
which consists of two sets, O (the objects) and A (the attributes) and a relation
I ⊆ O × A. Finite contexts can be represented by a 1-0-table (representing I as
a Boolean function on O × A). The main goal in FCA is the computation of the
concept lattice associated to the context.


    Basic logical expressions in FCA are implications between attributes, that is,
pairs of sets of attributes written as Y1 → Y2 . Truth with respect to M = (O, A, I)
is defined as follows. A subset T ⊆ A respects Y1 → Y2 if Y1 ⊈ T or Y2 ⊆ T .
We say that Y1 → Y2 holds in M (M |= Y1 → Y2 ) if for all o ∈ O, the set {o}′
respects Y1 → Y2 . In that case we say that Y1 → Y2 is an implication of M .
    From a propositional logic viewpoint, Y1 → Y2 is the formula ∧Y1 → ∧Y2 ,
so it is equivalent to a set of Horn clauses (implications with a singleton as
right-hand side). On the other hand, the definition of truth can be extended:
given Y ⊆ A, define ¬Y := Y → ⊥, which holds in the context if for all
o ∈ O, Y ⊈ {o}′ . Given a formula written with {→, ⊥}, M |= F can be defined
in the natural way. Since this set of connectives is functionally complete, the
truth definition can be extended to P F orm.

Definition 1. Let L be a set of implications and L an implication of M .
• L follows from L (L |= L) if each subset of A respecting L also respects L.
• L is closed if every implication following from L is already in L.
• L is complete if every implication of the context follows from L.
• L is non-redundant if for each L ∈ L, L \ {L} 6|= L.
• L is a stem basis for M if it is complete and non-redundant.

   For every context we can obtain a stem basis from the pseudo-intents:

Theorem 2. [7] The set L = {Y → Y ′′ : Y is a pseudo-intent} is a stem basis.

Actually one can choose Y → Y ′′ \ Y instead of Y → Y ′′ , so we will assume,
by default, that for every implication Y1 → Y2 belonging to a stem basis, Y1
and Y2 are disjoint. Such a basis for the example of figure 5 (left) is L = {∅ →
{N }, {N, A} → {M o}, {N, Le} → {M o}}.
    The so-called Armstrong rules facilitate implicational reasoning:

    R1 : infer X → X        R2 : from X → Y infer X ∪ Z → Y
    R3 : from X → Y and Y ∪ Z → W infer X ∪ Z → W

A set of implications L is closed if and only if it is closed under the Armstrong
rules [1]. A consequence of Armstrong's result is that, if ⊢A denotes the proof
notion associated with the Armstrong rules, stem bases are ⊢A -complete, that is:
Theorem 3. Let L be a stem basis for M , and L an implication. Then M |= L
if and only if L ⊢A L.
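The relation ⊢A can be decided by the usual closure computation: L ⊢A X → Y iff Y is contained in the closure of X under L. A minimal Python sketch (our own representation of implications as pairs of sets), using the stem basis of the example of figure 5:

```python
def closure(X, L):
    """Smallest superset of X closed under the implications in L;
    a fixed-point computation justified by the Armstrong rules."""
    X = set(X)
    changed = True
    while changed:
        changed = False
        for A, B in L:
            if A <= X and not B <= X:
                X |= B
                changed = True
    return X

def follows(L, A, B):
    """L |-_A  A -> B."""
    return set(B) <= closure(A, L)

# Stem basis of the running example: {} -> {N}, {N,A} -> {Mo}, {N,Le} -> {Mo}
L = [(set(), {'N'}), ({'N', 'A'}, {'Mo'}), ({'N', 'Le'}, {'Mo'})]
assert follows(L, {'A'}, {'Mo'})       # A -> Mo follows
assert not follows(L, {'Le'}, {'A'})   # Le -> A does not
```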
    Computing a stem basis may be expensive if the set of objects is large. It is
even possible that we do not have the complete context M , or that it has a
potentially infinite set of objects. Attribute exploration is an interactive procedure
designed to obtain a stem basis starting with a set H of good examples generating
the subcontext
                            M H := (H, A, I ∩ (H × A))
One expects that a stem basis associated to M H is also a stem basis for the
complete context. To guarantee it, we proceed as follows. Assume that L =


    1. Compute pseudo-intent: Find a pseudo-intent X for M H .
    2. Soundness of the new implication: Ask the user whether X → X ′′ holds (the
       operators ′ are w.r.t. the subcontext). The user must react:
        – Confirming the suggested implication (adding it to L), or
        – giving o (a counterexample) such that {o}′ does not respect the implication.
           This is added to H, and the implication is discarded.


                                Fig. 1. Attribute exploration


{L1 , . . . , Lk } is a partial set of implications accepted as true, built from pseudo-
intents of M H 1 . Attribute exploration consists of a loop of the two steps shown
in fig. 1, and it stops when no new pseudo-intent is found (see [4] for variants).


3      Boolean derivatives and non-clausal theorem proving
We introduce an operator on propositional formulas as a translation of the usual
derivation on F2 [x]. In this section we review its basic properties (from [2]).
   Recall that a derivation on a ring R is a map d : R → R satisfying
d(a + b) = d(a) + d(b) and d(a · b) = d(a) · b + a · d(b).
Definition 2. A map ∂ : P F orm → P F orm is a Boolean derivation if there
exists a derivation d on the ring F2 [x] such that ∂ = Θ ◦ d ◦ π
                                                                ∂
      If the derivation on F2 [x] is d = ∂/∂xp , we denote ∂ as ∂/∂p. It holds that:

                              (∂/∂p) F ≡ ¬(F {p/¬p} ↔ F )
    Thus, the value of (∂/∂p) F with respect to a valuation does not depend on p.
Therefore, we can apply valuations on P V \ {p} to this formula.
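As an illustration, the Boolean derivative can be computed by formally differentiating π(F ) in F2 [x] and reducing with Φ. A Python sketch (the helper phi is our own implementation of Φ):

```python
from sympy import symbols, diff, expand, Poly

x1, x2 = symbols('x1 x2')

def phi(expr, gens):
    """Phi: coefficients mod 2, exponents capped at 1."""
    acc = {}
    for monom, coeff in Poly(expand(expr), *gens).terms():
        sg = tuple(min(e, 1) for e in monom)
        acc[sg] = (acc.get(sg, 0) + coeff) % 2
    out = 0
    for monom, c in acc.items():
        if c:
            term = 1
            for g, e in zip(gens, monom):
                if e:
                    term *= g
            out += term
    return expand(out)

# pi(p1 & ~p2) = x1*(1 + x2)
f = expand(x1 * (1 + x2))
# Boolean derivative w.r.t. p1: differentiate, then reduce with Phi
df = phi(diff(f, x1), (x1, x2))
assert df == expand(1 + x2)   # the result no longer involves x1
```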
Definition 3. The independence rule (or ∂-rule) on polynomial formulas is

                        a1 , a2
    ∂x (a1 , a2 ) :     ───────────────────────────────────────────────────────────
                        1 + Φ[(1 + a1 · a2 )(1 + a1 · ∂x a2 + a2 · ∂x a1 + ∂x a1 · ∂x a2 )]

    In order to simplify the notation, write ai = bi + xp · ci with degxp (bi ) =
degxp (ci ) = 0 (i = 1, 2). Then the rule can be rewritten as

                        b1 + xp · c1 , b2 + xp · c2
    ∂xp (a1 , a2 ) :    ──────────────────────────────────────────
                        Φ [1 + (1 + b1 · b2 )[1 + (b1 + c1 )(b2 + c2 )]]
   The rule is symmetric and generalizes resolution of non-tautological polyno-
mial clauses (see lemma 1). For formulas the rule is translated as
                            ∂p (F1 , F2 ) := Θ(∂xp (π(F1 ), π(F2 ))).
1
    Pseudointents are generated in lexicographic order. This way previously computed
    pseudointents are preserved by augmentations of H. See th. 27 in [3].


    It naturally induces a concept of proof, ⊢∂ . A ⊢∂ -refutation is a proof of ⊥. In
[2] the soundness and the refutational completeness of ⊢∂ have been proved.
Theorem 4. [2] Let v : P V \ {p} → {0, 1}. The following conditions are equiv-
alent:
1. v |= ∂p (F1 , F2 ).
2. There exists an extension of v to P V that is a model of {F1 , F2 }.
    For example, ∂x1 (x1 (1 + x2 ), x1 (1 + x2 )) = 1 + x2 . So the valuation v with
v(p2 ) = 0 is the only one that can be extended to a model of p1 ∧ ¬p2 . When
∂xp (π(F1 ), π(F2 )) = 1, every partial valuation is extendable to a model of {F1 , F2 }.
Theorem 5. [2] If Γ is inconsistent, then Γ ⊢∂ ⊥.
    Define ∂p [Γ ] := {∂p (F, G) : F, G ∈ Γ }.
    Given Q = {q1 , . . . , qk } ⊆ P V , the operator ∂Q := ∂q1 ◦ · · · ◦ ∂qk is well defined
modulo logical equivalence (by corollary 4, for every p, q ∈ P V , ∂p ◦ ∂q [Γ ] ≡
∂q ◦ ∂p [Γ ]). A consequence of corollary 4 and theorem 5 is that entailment can
be localized to the variables of the goal:
Corollary 1. Γ |= F        ⇐⇒      ∂P V \var(F ) [Γ ] |= F
    We can give an explicit equivalent expression for ∂p when it is applied to
implications. To simplify, suppose that the right-hand sides of implications are
singletons.
Lemma 1. Let Ci ≡ ∧Y1^i → ∧Y2^i (i = 1, 2) be implications with Y1^i ∩ Y2^j = ∅,
and let Γ be a set of implications. Let ∂p^c (C1 , C2 ) be the symmetric operator
defined by the following case analysis:

    ∂p^c (C1 , C2 ) :=
        {C1 , C2 }                          if p ∉ var(C1 ) ∪ var(C2 )
        {C2 }                               if p ∈ Y1^1 , p ∉ var(C2 )
        {∧Y1^1 → ∧(Y2^1 \ {p}), C2 }        if p ∈ Y2^1 , p ∉ var(C2 )
        {⊤}                                 if p ∈ (Y1^1 ∩ Y1^2 ) ∪ (Y2^1 ∩ Y2^2 )
        {Resolventp (C1 , C2 )}             if p ∈ Y1^1 ∩ Y2^2

    If ∂p^c [Γ ] := ∪{∂p^c (C1 , C2 ) : C1 , C2 ∈ Γ }, then ∂Q^c [Γ ] ≡ ∂Q [Γ ] (Q ⊆ P V ).
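The case analysis of lemma 1 can be implemented directly on implications represented as pairs of attribute sets. A Python sketch (the representation and names are ours; ⊤ is encoded as the empty implication):

```python
def var(c):
    """Variables of an implication c = (Y1, Y2), meaning Y1 -> Y2."""
    return c[0] | c[1]

TOP = (frozenset(), frozenset())   # the empty implication, i.e. a tautology

def d_c(p, c1, c2):
    """The operator of lemma 1 on implications; symmetric in c1, c2."""
    if p not in var(c1) and p not in var(c2):
        return {c1, c2}
    if (p in c1[0] and p in c2[0]) or (p in c1[1] and p in c2[1]):
        return {TOP}
    for a, b in ((c1, c2), (c2, c1)):
        if p in a[0] and p not in var(b):
            return {b}
        if p in a[1] and p not in var(b):
            return {(a[0], a[1] - {p}), b}
        if p in a[0] and p in b[1]:
            # Resolvent_p: resolve the two Horn clauses on p
            return {((a[0] - {p}) | b[0], a[1] | (b[1] - {p}))}

# Resolution case: {p,q} -> {r} and {s} -> {p} give {q,s} -> {r}
c1 = (frozenset({'p', 'q'}), frozenset({'r'}))
c2 = (frozenset({'s'}), frozenset({'p'}))
assert d_c('p', c1, c2) == {(frozenset({'q', 's'}), frozenset({'r'}))}
```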

4    Algebraic characterization of sensitive implications
We shall provide an algebraic treatment for implications on a fixed M = (O, A, I).
It is well known that every set X ⊆ (F2 )^n is an algebraic set; that is, there exists
aX ∈ F2 [x] such that V (aX ) = X. If |A| = n, M is identified with a sub-
set X(M ) of (F2 )^n (each object identified with the 1-0 expression of its intent).
Let aM ∈ F2 [x] denote a polynomial formula such that V (aM ) = X(M ). Since
IV (aM ) = (aM ) + I2 , the coordinate ring of M is
                          F2 [x]/I(X(M )) ∼
                                          = (F2 [x]/(aM ) )/I2
One might also use an ideal JX such that V (JX ) = X, if it is more convenient to
work with ideals (for example, using CoCoA's command IdealsofPoints). In that
case we can assume that I2 ⊆ JM . We choose aM only to simplify the proofs.
   Also, each o ∈ O defines a valuation vo by: vo (pi ) = 1 iff o I pi .

Proposition 1. Let F ∈ P F orm and let L be a stem basis. The following con-
ditions are equivalent:
(1) M |= F.
(2) 1 + π(F ) ∈ (aM ) + I2 .
    Moreover, if F is an implication, they are also equivalent to
(3) {P (L) : L ∈ L} ∪ {1 + π(F )} ⊢∂ 0.
(4) ∂^c_{P V \ var(F )} [L] |= F .

Proof.
    (1) ⇐⇒ (2): If M |= F , then V (aM ) ⊆ V (1 + π(F )). Thus, IV (1 + π(F )) ⊆
IV (aM ), hence 1 + π(F ) ∈ (aM ) + I2 . The converse is similar.
    If F is an implication and M |= F , then L |= F . Therefore L ∪ {¬F } is incon-
sistent, so by refutational completeness L ∪ {¬F } ⊢∂ ⊥, and we have (3). The
converse is true by soundness. (4) is equivalent to L |= F by lemma 1.
    We now deal with the problem of redundant arguments in implications. In
the worst case, recognizing redundancy requires a complete exploration of the
intents. An argument is redundant if it is not sensitive:

Definition 4. A formula F is sensitive in p w.r.t. a formal context M if M 6|=
F {p/¬p} ↔ F . We say that F is sensitive w.r.t. M (or simply sensitive, if M
is fixed) if F is sensitive in all its variables.
    Thus, F is not sensitive in p iff M |= ¬(∂/∂p) F . In this case, there exists G with
var(G) = var(F ) \ {p} such that M |= F ↔ G (e.g. F {p/⊥}).
    Sensitive implications (also called proper implications) have several advan-
tages over implications obtained from pseudo-intents (see [10]). In attribute ex-
ploration, sensitivity analysis is justified: it is possible that implications are based
on a nonrepresentative set of examples, and thus they can be refined, either by
giving witnesses of the role of the arguments in the implication, or by making
them more precise, removing redundant arguments:

Lemma 2. Let L = Y1 → Y2 be an implication. If M |= L and L is not sensitive
in p ∈ Y1 , then M |= Y1 \ {p} → Y2 . If p ∈ Y2 , then M |= ¬Y1 .

By default, sensitivity analysis for implications will be always restricted to at-
tributes in the left-hand side.

Proposition 2. Let p ∈var(F ). The following conditions are equivalent:
(1) F is sensitive in p w.r.t. M .
(2) (∂/∂xp ) π(F ) ≠ 0 in the coordinate ring of M .


Proof. (1) =⇒ (2): Assume vo ⊭ F ↔ F {p/¬p} for some o ∈ O. Then vo |= (∂/∂p) F ,
so V (aM ) ⊈ V (π((∂/∂p) F )) = V ((∂/∂xp ) π(F )), hence (∂/∂xp ) π(F ) ∉ (aM ) + I2 .
(2) =⇒ (1): If (∂/∂xp ) π(F ) ∉ (aM ) + I2 , then O = V (aM ) ⊈ V (π((∂/∂p) F )).
Therefore there exists o ∈ O such that vo |= (∂/∂p) F . Thus F is sensitive in p.



 (3) Sensitivity test: If the implication has not been discarded, test whether the implication is
     sensitive in all its arguments w.r.t. the actual set H (using lemma 2 if necessary). If it is
     not sensitive in some of them, the user must react:
       – Adding a new example o to H, a witness of the sensitivity (that is, he/she thinks that
          it is sensitive), or
       – eliminating the attribute from the implication (he/she accepts it is not sensitive),
          changing the accepted implication by the refined one.



                  Fig. 2. Sensitivity test to add to the algorithm of fig. 1


    One can recursively apply the above criteria (w.r.t. an order on P V ) to obtain
sensitive implications. If L is a stem basis and L′ is the refinement obtained, since
Armstrong's rule R2 states Y1 \ {p} → Y2 |= Y1 → Y2 , one has that L′ |= L. Thus
L′ is also a complete set of implications. The set L′ has the advantage over other
sets of proper implications (e.g. [10]) that it works directly on the Duquenne-
Guigues basis, so it does not need a specific algorithm to build it.


5    Variants of attribute exploration
We shall propose new steps for attribute exploration. All of them are investigated
with the translation into polynomials in mind. Although in the exposition we do
not explicitly use polynomials (the results and their proofs are more readable
in logical form), in practice they will be useful.
    The attribute exploration can be extended by adding a sensitivity test w.r.t.
H (shown in fig. 2). Note that the addition of a new object follows the formula

              aH∪{(δ1 ,...,δn )} = Φ(aH · (1 + ∏_{i=1}^{n} (xi + δi + 1)))
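For instance, on a toy two-attribute context of our own, the formula can be checked by direct evaluation; the Φ-reduction may even be omitted here, since it does not change values on {0, 1}^n :

```python
from itertools import product
from sympy import symbols, expand

x1, x2 = symbols('x1 x2')

aH = x1 + 1                     # V(aH) on {0,1}^2 is H = {(1,0), (1,1)}
delta = (0, 0)                  # row of the new object

# (x_i + delta_i + 1) equals 1 over F2 exactly when x_i = delta_i,
# so the product below is 1 exactly at the point delta
point = (x1 + delta[0] + 1) * (x2 + delta[1] + 1)
aH_new = expand(aH * (1 + point))   # Phi omitted: it preserves values on {0,1}^2

zeros = {p for p in product((0, 1), repeat=2)
         if aH_new.subs({x1: p[0], x2: p[1]}) % 2 == 0}
assert zeros == {(0, 0), (1, 0), (1, 1)}   # delta was added to the variety
```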

   For the running example, the implication N ∧ A → M o is obtained and
considered as sound. In this case, aM = x1 x2 x4 + x1 x2 + x1 x3 + x1 x4 + x1 + 1.
A Gröbner basis for (aM ) + I2 is

 {x4^2 + x4 , x3^2 + x3 , x2^2 + x2 , x3 x4 + x3 , x2 x4 + x2 + x3 + x4 , x2 x3 + x2 , x1 + 1}
One verifies (with CoCoA) that (∂/∂x1 ) π(N ∧ A → M o) = x2 (1 + x3 ) ∈ (aM ) + I2 .
    We think that the implication is not really sensitive in N (every live being
needs water), so we accept A → M o, which is now sensitive. Reasoning similarly
with the other one, we obtain {∅ → N, A → M o, Le → M o}, a stem basis of
sensitive implications.
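The same membership test can be reproduced outside CoCoA; for instance, a sympy sketch of the computation above (with the variable order x1 = N , x2 = A, x3 = M o, x4 = Le):

```python
from sympy import symbols, groebner, expand

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')   # N, A, Mo, Le

aM = expand(x1*x2*x4 + x1*x2 + x1*x3 + x1*x4 + x1 + 1)
I2 = [x + x**2 for x in (x1, x2, x3, x4)]

G = groebner([aM] + I2, x1, x2, x3, x4, modulus=2, order='lex')

# d/dx1 pi(N & A -> Mo) = x2*(1 + x3); normal form 0 means membership
# in (aM) + I2, i.e. the implication is NOT sensitive in N
d = expand(x2 * (1 + x3))
assert G.reduce(d)[1] == 0
```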
    A sensitivity test can also be added when background knowledge exists. In
this case, we deal with hard problems such as consistency checking or entailment.
It starts with H and a background knowledge Γ for M H , that is, M H |= Γ . Or,
in algebraic terms, V (aH ) ⊆ V ({1 + π(F ) : F ∈ Γ }). The step to add is given
in figure 3. Condition (∗) means

                    1 + (∂/∂xp ) π(L) ∉ ({1 + π(F ) : F ∈ Γ }) + I2


 (3) Sensitivity test: Test whether the implication is sensitive in all its arguments with the
     actual set H and the background knowledge Γ , that is, whether for each attribute p in
     the antecedent of L
                                  Γ ⊭ L ↔ L{p/¬p}       (∗)
     If it is not sensitive in some of them, the user must react:
        – Giving o ∈ O, a witness of the sensitivity (which is added to H), that is,
                                  vo |= {(∂/∂p) L} ∪ Γ       (∗∗)
        – or else removing the attribute from the implication (he/she accepts it is not sensitive).



                    Fig. 3. Sensitivity test with background knowledge


Regarding the existence of an object for (∗∗), if the user does not know one,
but believes that it really exists, a model search program may be used to provide
an anonymous object. Test (∗∗) can be directly translated into algebraic terms.


6     Attribute exploration with new attributes
Now we propose how to extend the context by adding new attributes. Formally,
one starts with M0 , a subcontext with a partial set of attributes,

                              M0 = (H, A0 , I ∩ (H × A0 )),               with A0 ( A

Assume that, at some stage, full extents for a set H ′ of objects are introduced,
with the aim of expanding the new attributes to the initial objects of M0 (see fig.
5). About the new attributes, the user only knows a background knowledge Γ
relating old and new attributes. Since it seems not advisable to add many
attributes at once (to facilitate the answers to the tests), Γ will be relatively small.
    It is important to observe that ∆ = L ∪ Γ , where L is the partial set of
implications, may be inconsistent with ontological commitments implicitly or
unconsciously accepted for the old attributes; that is, it may be false for M0
once the extents of H are expanded to the full attribute set. Thus one needs
an expandability test for the objects of H (to simplify, assume that the new
attributes are {pk+1 , . . . , pn }):

    For each o ≡ (δ1 , . . . , δk ) of H, is there (δk+1 , . . . , δn ) ∈ {0, 1}^{n−k} such that
                   {pj : δj = 1, j ∈ {1, . . . , n}} respects ∆?
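The expandability test can be decided by brute force over the 2^{n−k} candidate completions. A Python sketch (the representation is ours: an implication is a pair (Y1 , Y2 ), and (Y, None) encodes ¬Y , i.e. Y → ⊥):

```python
from itertools import product

def respects(T, imp):
    Y1, Y2 = imp
    if Y2 is None:                      # (Y, None) encodes Y -> bottom
        return not set(Y1) <= T
    return not set(Y1) <= T or set(Y2) <= T

def expandable(partial, new_attrs, delta):
    """Can the partial intent be completed on new_attrs so that every
    formula in delta is respected?"""
    for bits in product((0, 1), repeat=len(new_attrs)):
        T = set(partial) | {a for a, b in zip(new_attrs, bits) if b}
        if all(respects(T, imp) for imp in delta):
            return True
    return False

# Delta = {A -> ~La, Le & Mo -> La} from the example of section 6
delta = [({'A', 'La'}, None), ({'Le', 'Mo'}, {'La'})]
# A frog-like object (aquatic, mobile, with legs) cannot be expanded...
assert not expandable({'A', 'Mo', 'Le'}, ['La'], delta)
# ...while a non-aquatic one with legs and mobility can (La = 1)
assert expandable({'Mo', 'Le'}, ['La'], delta)
```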

Theorem 6. Let M be an expansion of M0 to the complete attribute set, with
the same set of objects. If Γ is a stem basis (respectively a background knowl-
edge) for M , then ∂^c_{pk+1 ,...,pn } [Γ ] is a complete set of implications (respectively
∂_{pk+1 ,...,pn } [Γ ] is a background knowledge) for M0 .
Proof. Assume that Γ is a stem basis. Let L be an implication in the language
A \ {pk+1 , . . . , pn }. If M |= L, then Γ |= L. By corollary 1, ∂{pk+1 ,...,pn } [Γ ] |= L,
so by lemma 1, ∂^c_{pk+1 ,...,pn } [Γ ] |= L holds. If Γ is a background knowledge,
the result is a straightforward consequence of corollary 1.



 (3) Expansion test: If the implication has not been discarded, test whether the set of
     implications plus the background knowledge is extendable to H.
       – If it is extendable, the user shall proceed by:
            • Confirming the suggested implication, or
            • giving o ∈ O such that {o}′ does not respect the implication. This is added to
              H ′ , and the implication is discarded.
       – Otherwise, the user must revise the background knowledge, or discard the implication.



               Fig. 4. Additional step for exploration with new attributes

  Left (initial context):
           Need water  Aquatic  Mobility  Legs
    Cat        1          0         1       1
    Leech      1          1         1       0
    Frog       1          1         1       1
    Maize      1          0         0       0
    Fish       1          1         1       0

  Right (extension with the new attribute Land):
           Need water  Aquatic  Mobility  Legs  Land
    Cat        1          0         1       1     ?
    Leech      1          1         1       0     ?
    Frog       1          1         1       1     ?
    Maize      1          0         0       0     ?
    Fish       1          1         1       0     0
    Dog        1          0         1       1     1
    Bean       1          0         0       0     1

          Fig. 5. Extension of the context on live beings with new attributes


Corollary 2. Let ∆ ⊆ P F orm. The following conditions are equivalent:
(1) Every extension of objects of H can be expanded to the full attribute set,
consistently with ∆.
(2) {1 + π(F ) : F ∈ ∂{pk+1 ,...,pn } [∆]} ⊆ (aH ) + I2


    Assume now that it has previously been certified that ∆ is expandable to the
objects of H, and let L be a new implication. If ∆ ∪ {L} can be consistently
extended to H, but the user thinks that it is not true, in a first stage the user is
required to give a counterexample for L by completing the extent of some object
of H (in this way the set of new examples is bounded), or, if he/she does not
know one, a new example. Summarizing, the new step is shown in figure 4.
    For example, suppose that we decide to add a new attribute, to live on land
(La). Some complete extensions are given (figure 5). We only know, as back-
ground knowledge, that aquatic live beings do not live on land, and we consider
the implication every live being with legs and mobility lives on land, that is

                        ∆ = {A → ¬La, Le ∧ M o → La}

In this case, π[∆] = {1 + xA xLa , 1 + xLe xM o + xLe xM o xLa }.
    The set H cannot be consistently expanded to a model of ∆, because
∂{xLa } [π[∆]] = {1 + xA + xA xLe xM o , 1} and xA + xA xLe xM o ∉ (aH ) + I2 .


6.1 A final remark: defining the new attributes
We now see how to extend the above procedure for learning the new attribute.
We suppose we have a stem basis consistent with the old information; and, in a
second stage, we wish to find a definition of the new attribute w.r.t. the old ones

(if the user thinks it is possible). The next theorem states a solution, which is
an adaptation of the predicate completion procedure (sect. 6.2 in [5]). That is,
we consider the stem basis to be a complete knowledge base for the attribute.
Theorem 7. Let M0 be as in section 6 with A0 = A \ {p}. Assume that L is a
stem basis, built by attribute exploration with expansion test. Let

               Ω = {Y1 ⊆ A0 : there exists Y ⊆ A0 s.t. Y1 → Y ∪ {p} ∈ L}

If Mc is the expansion of M0 to A by defining the intent w.r.t. {p} by

                    p ∈ {o}′ ⇐⇒ vo ( ∨_{Y ∈Ω} ∧Y ) = 1

then L is a stem basis for Mc .
    Since Mc |= p ↔ ∨_{Y ∈Ω} ∧Y , Mc is a model of the completion formula for p.
Thus, the intent of each object o is expanded to pn by the polynomial

               vo (pn ) := π( ∨_{Y ∈Ω} ∧Y )(vo (p1 ), . . . , vo (pn−1 ))
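Operationally, the completion definition only checks whether some premise set in Ω is contained in the object's old intent. A minimal sketch (the set Ω below is a hypothetical example, not from the paper):

```python
def completed_value(intent, omega):
    """Value of the new attribute p on an object with the given (old) intent,
    following the predicate-completion definition: p holds iff some
    Y in Omega is contained in the intent."""
    return any(set(Y) <= set(intent) for Y in omega)

# Hypothetical premise sets for p
omega = [{'a', 'b'}, {'c'}]
assert completed_value({'a', 'b', 'd'}, omega)    # {'a','b'} is contained
assert not completed_value({'a', 'd'}, omega)     # no premise set fits
```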



7     An application: discovering tree notion in graph theory
We shall investigate the relationship among several properties of graphs (with
three or more nodes), comparing the stem basis produced by classical attribute
exploration with the result of the new methods. The properties are: acyclic, con-
nected, 2-connected (if one edge of the graph is removed, the induced subgraph
remains connected), geodetic (for every two nodes there exists only one shortest
path), bipartite (the set of nodes can be partitioned into two sets such that every
edge joins a node of each set), nonseparable (connected and, if one removes a
node, the resulting graph remains connected), and planar.
    We began (classical) attribute exploration with the first two objects of figure
6 (left). For this we used ConExp, and the result is the formal context of fig. 6
(left; K5 is the complete graph on five nodes, and K33 is the complete bipartite
graph with two partition sets of three nodes each). The stem basis is:

   L1 : t → a, b, c, g, p L2 : n → c, d       L3 : g → c    L4 : d → c
   L5 : b, c, g, p → a, t L6 : b, c, d, g → n L7 : a → b, p L8 : a, c, b, p → g, t
One might apply the completion procedure on tree, obtaining a (messy) definition:
(Bipartite ∧ Connected ∧ Geodetic ∧ Planar) ∨ (Acyclic ∧ Connected ∧ Bipar-
tite ∧ Planar).
    It is not even evident that the first conjunct defines a tree; it is necessary
to know the fact that every geodetic and bipartite graph is acyclic. For this
context, the ideal generated is
context, the ideal generated is
    JM = (g + b + t + 1, c + d + t, a + d + 1, t^2 + t, pt + t, nt, bt + t, dt, p^2 + p,
np + n + p + 1, dp + d + p + 1, n^2 + n, dn + n, b^2 + b, db + d + b + 1, d^2 + d).
    The first interesting sensitivity analysis is on L5 (π(L5 ) = 1 + bcgp(1 + at)):




  [Figure: 1-0 table of the formal context on graphs, with attributes Acyclic (a),
  Connected (c), 2-connected (d), Geodetic (g), Bipartite (b), Nonseparable (n),
  Planar (p), Tree (t), and rad.-minimal (r); among the objects are K5 and K33 .]

  Fig. 6. Formal context on graphs, and the extension obtained for radius-minimal


 – ∂/∂b π(L5) ∉ JM, so L5 is sensitive in b; hence b is preserved in the implication.
 – ∂/∂c π(L5) ∈ JM, so L5 is not sensitive in c (in this case, we see that g → c holds in graphs); hence we redefine L5 := b, g → a, t.
 – ∂/∂g π(L5) ∉ JM, so L5 is sensitive in g; g is preserved, finishing the analysis.
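Since JM contains the field equations x² + x, these membership tests can be carried out concretely by evaluation, assuming JM is the vanishing ideal of the context's object rows: a Boolean polynomial then lies in JM iff it is 0 on every row. A toy sketch in Python (the three-row context and all names are ours, not the paper's data):

```python
def boolean_derivative(poly, x):
    """Boolean derivative: (d/dx) p = p|_{x=0} + p|_{x=1} over F2.
    Here `poly` is a function from a 0/1 row (a dict) to 0 or 1."""
    def value(row):
        return (poly(dict(row, **{x: 0})) + poly(dict(row, **{x: 1}))) % 2
    return value

def in_vanishing_ideal(poly, rows):
    """Membership in the vanishing ideal of `rows`: a Boolean
    polynomial belongs to it iff it evaluates to 0 on every row."""
    return all(poly(row) == 0 for row in rows)

# Toy three-row context over attributes a, b, c and implication L: b, c -> a,
# whose translation is pi(L) = 1 + bc(1 + a).
rows = [
    {"a": 1, "b": 1, "c": 1},
    {"a": 0, "b": 0, "c": 1},
    {"a": 0, "b": 0, "c": 0},
]
pi_L = lambda row: (1 + row["b"] * row["c"] * (1 + row["a"])) % 2

print(in_vanishing_ideal(boolean_derivative(pi_L, "b"), rows))  # False: L is sensitive in b
print(in_vanishing_ideal(boolean_derivative(pi_L, "c"), rows))  # True: L is not sensitive in c
```

Dropping c accordingly gives the refined implication b → a, which indeed holds on every row of the toy context.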

The other cases (L6 and L8) are treated similarly. The resulting basis is
         t → a, c, g, b, p    n → c, d    g → c          d → c
    L =
         b, g → a, t, p       a → b, p    a, c → g, t    d, t → n
    The completion of tree from this basis is
           Tree ↔ (Bipartite ∧ Geodetic) ∨ (Acyclic ∧ Connected)
    It is easy to see that the first conjunction is equivalent to the second one, the original definition of tree.
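This equivalence can be checked mechanically by computing attribute closures under the basis L with forward chaining, since two attribute sets are equivalent modulo L exactly when their closures coincide. A plain-Python sketch (the single-letter encoding follows the context; the helper name `closure` is ours):

```python
def closure(attrs, implications):
    """Closure of an attribute set under implications (premise, conclusion),
    computed by forward chaining to a fixed point."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if set(premise) <= closed and not set(conclusion) <= closed:
                closed |= set(conclusion)
                changed = True
    return closed

# The basis L above, with attributes written as single letters.
L = [("t", "acgbp"), ("n", "cd"), ("g", "c"), ("d", "c"),
     ("bg", "atp"), ("a", "bp"), ("ac", "gt"), ("dt", "n")]

# Bipartite & Geodetic vs. Acyclic & Connected: same closure under L.
print(closure("bg", L) == closure("ac", L))  # True
```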
    Our next aim is to expand our set of attributes with a new one, radius-minimal (denoted by the variable r). The distance between two nodes of a graph is the length of a shortest path between them. The eccentricity of a node v is its distance to a node farthest from v. The radius of a graph G, r(G), is the minimum eccentricity over its nodes. Finally, a graph is called radius-minimal if r(G − e) > r(G) for every edge e of G. We used the method shown in Section 6; the objects of Fig. 6 suffice for it.
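As an illustration of these definitions (not part of the paper's machinery), radius-minimality can be checked directly with breadth-first search; in this sketch `None` stands for the infinite eccentricity of a disconnected graph:

```python
from collections import deque

def radius(adj):
    """Radius = minimum over nodes of the eccentricity (max BFS distance).
    Returns None if the graph is disconnected."""
    def ecc(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values()) if len(dist) == len(adj) else None
    eccs = [ecc(s) for s in adj]
    return None if None in eccs else min(eccs)

def radius_minimal(adj):
    """r(G - e) > r(G) for every edge e (None counts as infinity)."""
    r = radius(adj)
    if r is None:
        return False
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                smaller = {w: [x for x in adj[w] if {w, x} != {u, v}] for w in adj}
                r2 = radius(smaller)
                if r2 is not None and r2 <= r:
                    return False
    return True

# The path P3 (a tree) is radius-minimal; the triangle C3 is not.
path = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(radius_minimal(path))      # True
print(radius_minimal(triangle))  # False
```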
    The exploration starts with the first two objects of Figure 6, knowing that the first one is radius-minimal and the second one is not. We also have the background knowledge {¬c → ¬r}. The procedure gives the basis
          L1 : r → a, c, g, b, p, t    L2 : t → a, c, g, b, p, r     L3 : n → c, d
    L =   L4 : g → c                   L5 : d → c                    L6 : c, g, b → a, p, t, r
          L7 : a → b, p                L8 : a, c, b, p → g, t, r     L9 : a, c, d, g, b, p, t, r → n
    After testing sensitivity, three implications are refined, producing:

                L6 : g, b → a, p, t, r L8 : a, c → g, t, r L9 : d, r → n

and the rest remain unchanged. Thus the completion for r is
    Radius-minimal ↔ Tree ∨ (Geodetic ∧ Bipartite) ∨ (Acyclic ∧ Connected)
    The last two conjunctions are equivalent to Tree, so we conclude that Radius-minimal and Tree are equivalent. In fact, this result is proved in [6]. Thus we take vo(r) := t to extend the attribute r to objects.

8     Conclusions and Future work
We present a framework for solving problems of FCA with the assistance of a
CAS. We are confident that the tools described here may be useful to facilitate
knowledge processing. As mentioned in previous sections, the complexity of some subproblems involved in the improvements of attribute exploration may restrict the method to projects of modest size unless a CAS such as CoCoA is used.
    Throughout the paper we have remarked on some works related to the tools used here. Future work includes the extension to many-valued logics and their applications [9].

References
 1. Armstrong, W. W.: Dependency structures of data base relationships. Proc. of IFIP
    Congress, Geneva, 580–583 (1974).
 2. Borrego-Díaz, J., Fernández-Lebrón, M.: Theoretical foundations of a specialised
    polynomial-based calculus for computing conservative retractions in propositional
    logic, to appear (2008).
 3. Ganter, B., Wille, R.: Formal Concept Analysis: Mathematical Foundations.
    Springer, Berlin (1999).
 4. Ganter, B.: Attribute exploration with background knowledge. Theoretical Com-
    puter Science 217, 215–233 (1999).
 5. Genesereth, M., Nilsson, N.: Logical Foundations of Artificial Intelligence. Morgan
    Kaufmann, Los Altos (1987).
 6. Gliviak, F.: On radially critical graphs. In: Recent Advances in Graph Theory, Proc.
    Sympos., Academia Praha, 207–221 (1975).
 7. Guigues, J.-L., Duquenne, V.: Familles minimales d'implications informatives ré-
    sultant d'un tableau de données binaires. Math. Sci. Humaines 95, 5–18 (1986).
 8. Kapur, D., Narendran, P.: An equational approach to theorem proving in first-order
    predicate calculus. Proc. 9th Int. Joint Conf. on Artificial Intelligence (IJCAI'85),
    1146–1153 (1985).
 9. Laita, L. M., Roanes-Lozano, E., de Ledesma, L., Alonso-Jiménez, J. A.: A com-
    puter algebra approach to verification and deduction in many-valued knowledge
    systems. Soft Computing 3, 7–19 (1999).
10. Taouil, R., Bastide, Y.: Computing Proper Implications. Proc. of Workshop on
    Concept Lattices-based Theory, Methods and Tools for Knowledge Discovery in
    Databases (E. Mephu et al., eds.), 49–61 (2001).