CEUR Workshop Proceedings Vol-1040, paper 3 (https://ceur-ws.org/Vol-1040/paper3.pdf)
                An Implementation for Fault Tolerance
                     and Experimental Results

                                    Frithjof Dau, SAP AG



         Abstract. Even for small or medium sized contexts, the corresponding concept
         lattice soon becomes too large to be displayed nicely and understood, which
         renders a naive application of FCA for visual analytics challenging. Concept
         lattices depict all information of the underlying formal context. In certain
         settings, one can consider the context to be noisy or incomplete; moderate and
         meaningful changes of the context, such that the corresponding lattices shrink
         significantly in size and complexity, are then permissible. In this paper, such
         an approach is taken. The formal definitions for the approach are given, and,
         using an implementation, several examples are provided which show the
         usefulness of the approach.


1        Introduction

CUBIST1 is an EU-funded research project which leverages BI to a new level of
precise, meaningful and user-friendly analytics of data by following a best-of-breed
approach that combines essential features of Semantic Technologies, Business
Intelligence and FCA-based Visual Analytics. A main goal of CUBIST is the provision
of novel Visual Analytics based on meaningful diagrammatic representations. The
Visual Analytics part of CUBIST complements traditional BI-means by utilizing
Formal Concept Analysis (FCA) for analyzing the data in a triple store: whereas
traditional BI focuses on the visualization and analysis of numerical, quantitative
data ("show me the numbers"), FCA is a well-known theory of data analysis which
allows objects to be conceptually clustered with respect to a given set of attributes
and the (lattice-ordered) set of clusters to be visualized, e.g. by means of
Hasse diagrams; it is thus best suited for the visualization and analysis of
structural dependencies in the data.
The starting point of FCA is a so-called formal context (O,A,I) consisting of a set O of
formal objects, a set A of formal attributes, and an incidence relation I ⊆ O × A
between the formal objects and attributes. The corresponding concept lattice preserves
all the information of the formal context, and it might grow exponentially in size
compared to O and A (one might have 2^max{|O|,|A|} many formal concepts). Hence even
for data sets of moderate size, the corresponding lattices soon become too complex
to be visualized and thus incomprehensible. This is a well-known problem of FCA
and has already been addressed with different approaches. Iceberg-lattices, introduced
by Gerd Stumme, show only the topmost part of a concept lattice, thus reducing the

1
    www.cubist-project.eu
number of concepts shown to the user [8,9]. It is not only the number of concepts
which often renders the visualization of lattices complicated: even with good layout
algorithms, the Hasse diagram of a lattice often contains a significant number of
crossing edges, which is the main problem for the visual comprehension of graphs.
Cassio Melo et al. present in [4,5] different metrics which allow a uniquely given
upper neighbor to be chosen for each node in a concept lattice. By doing so, one can
transform a lattice into a tree (with loss of information), which in turn can be
displayed better than a lattice. Finally, Boulicaut et al. consider in [2,7] "noisy"
formal contexts and directly change the incidence relation I in order to reduce the
number of formal concepts. For CUBIST, in line with the approach of Boulicaut et al.,
Simon Andrews has provided in [1] an example with real data of one of the CUBIST use
cases which clearly shows the advantage of this approach.
In this paper, we follow up on the notion of considering contexts to be noisy and
extend the incidence relation in order to simplify the derived concept lattices. In
the next section, we provide the mathematical definition of how we change the
incidence relation. As described in the following section, this approach has been
implemented in a small FCA-tool in order to show its effectiveness. A thorough,
though artificial, example is given in the next section, before we turn to examples
based on real data from the CUBIST project.
   The authors want to stress that, apart from providing the mathematical definitions,
this paper focuses on the implementation and the experimental results. A thorough
mathematical investigation is out of scope of this paper and will be provided in a
different paper which is currently in preparation.


2      Formal Definition

   Let a formal context (O,A,I) be given. Our basic idea is to understand the context to
be noisy or incomplete, and we aim at adding crosses to I such that the derived
lattice becomes smaller (hence, easier to read and understand). For an object o ∈ O
and an attribute a ∈ A, our starting point is the well-known equation

         oIa  ⇔  oII ⊆ aI  ⇔  aII ⊆ oI  ⇔  oI → a             (1)

   That is, we do not have oIa if some objects and/or some attributes violate (1). We
define the sets of those objects and attributes as follows:

         DiffObj(o,a) := { x ∈ O | x ∈ oII and x ∉ aI } and
         DiffAtt(o,a) := { y ∈ A | y ∈ aII and y ∉ oI }

    The straightforward approach taken in this paper is, roughly speaking, as follows:
the bigger DiffObj(o,a) or DiffAtt(o,a), the less willing we are to add an additional
cross between o and a. To do so, it is possible to take only DiffObj(o,a) into account
(this approach is reasonable for contexts with a small number of attributes but a
large number of objects, which is standard in traditional BI settings), or only
DiffAtt(o,a), or both. Moreover, we can either relate DiffObj(o,a) and DiffAtt(o,a) to
the overall sets of objects and attributes, i.e. take a global approach, or relate them
only to the concepts generated by o and a, i.e. take a local approach. This gives rise
to six different ways to measure an "approximate incidence" between o and a
as follows:

 GObj(o,a)     := 1 − |DiffObj(o,a)|/|O|
 GAtt(o,a)     := 1 − |DiffAtt(o,a)|/|A|
 GObj,Att(o,a) := 1 − 1/2 (|DiffObj(o,a)|/|O| + |DiffAtt(o,a)|/|A|)
 LObj(o,a)     := 1 − |DiffObj(o,a)|/|oII|
 LAtt(o,a)     := 1 − |DiffAtt(o,a)|/|aII|
 LObj,Att(o,a) := 1 − 1/2 (|DiffObj(o,a)|/|oII| + |DiffAtt(o,a)|/|aII|)

  Fig 1: Six approaches to measure the incidence between objects and attributes
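To make the definitions concrete, the six measures of Fig 1 can be sketched in a few lines of Python. This is an illustrative sketch only, not the tool described later in this paper; the names `derive_objs`, `derive_atts` and `measures` are ours, and a context is represented simply as a set of (object, attribute) pairs.

```python
# Illustrative sketch of the six measures in Fig 1 (names are ours).
# A formal context (O, A, I) is given as two sets and a set of pairs.

def derive_objs(attrs, O, I):
    """Derivation: all objects having every attribute in attrs."""
    return {o for o in O if all((o, a) in I for a in attrs)}

def derive_atts(objs, A, I):
    """Derivation: all attributes shared by every object in objs."""
    return {a for a in A if all((o, a) in I for o in objs)}

def measures(o, a, O, A, I):
    oI  = derive_atts({o}, A, I)   # o'
    oII = derive_objs(oI, O, I)    # o''
    aI  = derive_objs({a}, O, I)   # a'
    aII = derive_atts(aI, A, I)    # a''
    diff_obj = {x for x in oII if x not in aI}   # DiffObj(o,a)
    diff_att = {y for y in aII if y not in oI}   # DiffAtt(o,a)
    return {
        "GObj":    1 - len(diff_obj) / len(O),
        "GAtt":    1 - len(diff_att) / len(A),
        "GObjAtt": 1 - (len(diff_obj) / len(O) + len(diff_att) / len(A)) / 2,
        "LObj":    1 - len(diff_obj) / len(oII),
        "LAtt":    1 - len(diff_att) / len(aII),
        "LObjAtt": 1 - (len(diff_obj) / len(oII) + len(diff_att) / len(aII)) / 2,
    }
```

Note that for an incident pair (o,a), all six values equal 1, in line with equation (2) below.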

  For any of these incidence measures S ∈ { GObj , GAtt , GObj,Att , LObj , LAtt , LObj,Att }
we obviously have for all o,a:

            0 ≤ S(o,a) ≤ 1 and S(o,a) = 1 ⇔ oIa           (2)

  So, for a given threshold t with 0 ≤ t ≤ 1 we can naturally set

            JS,t := { (o,a) | S(o,a) ≥ t }                 (3)

   Due to (2), we obviously have s ≤ t ⇒ JS,t ⊆ JS,s and JS,1 = I.
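Definition (3) can be sketched for, e.g., the measure GObj as follows. Again this is an illustrative sketch with invented names (`atts_of`, `g_obj`, `approx_incidence`), not the authors' implementation.

```python
# Illustrative sketch of equation (3) for S = GObj; names are ours.

def atts_of(o, I):
    """The attribute set o' of a single object."""
    return {b for (x, b) in I if x == o}

def g_obj(o, a, O, I):
    oI  = atts_of(o, I)
    oII = {x for x in O if oI <= atts_of(x, I)}   # objects sharing all of o's attributes
    aI  = {x for x in O if (x, a) in I}           # objects having a
    return 1 - len(oII - aI) / len(O)             # GObj(o,a) from Fig 1

def approx_incidence(O, A, I, t):
    """J_{GObj,t}: all pairs whose measure reaches the threshold t."""
    return {(o, a) for o in O for a in A if g_obj(o, a, O, I) >= t}
```

By (2), `approx_incidence(O, A, I, 1.0)` returns exactly I, and lowering t can only add pairs, which mirrors the nesting JS,t ⊆ JS,s for s ≤ t.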
   Finally, the local measure for objects corresponds to the confidence of the
corresponding association rule, which is a nice rationale for that measure:

        LObj(o,a) = 1 − |DiffObj(o,a)|/|oII| = 1 − |oII \ aI|/|oII| = |oII ∩ aI|/|oII| = conf(oI → a)
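This identity can be checked mechanically on a toy context. The sketch below (our own illustrative code, with invented names) computes LObj directly from the definition in Fig 1 and the confidence of the rule oI → a by counting supports:

```python
# Illustrative check that LObj(o,a) equals conf(oI -> a); names are ours.

def atts_of(o, I):
    """The attribute set o' of a single object."""
    return {b for (x, b) in I if x == o}

def l_obj(o, a, O, I):
    oI  = atts_of(o, I)
    oII = {x for x in O if oI <= atts_of(x, I)}   # objects with all of o's attributes
    aI  = {x for x in O if (x, a) in I}           # objects having a
    return 1 - len(oII - aI) / len(oII)           # LObj(o,a) from Fig 1

def confidence(premise, a, O, I):
    """conf(premise -> a): fraction of objects having premise that also have a."""
    sup = [x for x in O if premise <= atts_of(x, I)]
    return sum(1 for x in sup if (x, a) in I) / len(sup)
```

On any context, `l_obj(o, a, O, I)` agrees with `confidence(atts_of(o, I), a, O, I)`, since oII is exactly the support set of the premise oI.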


   We will now exemplify (3) for S := GObj (and LObj, which for this example yields
the same relation) with the following simple context:




  Please note the following:
  - For o10, we have o10 I a3 and o10 I a1, but not o10 I a2. The implication
    o10I → a2 is violated by exactly one object, namely o10 itself. If we consider
    a threshold of 0.9 (that is, as we have 10 objects in total, we set a cross
    between o and a iff the condition oI → a is violated by at most one object),
    then we should add a cross between o10 and a2. Note that the high number of
    objects which have a1 as attribute (i.e. o3 - o9) is not relevant.
  - For o1, we have o1 I a3 and o1 I a2, but not o1 I a1. The implication
    o1I → a1 is violated by exactly two objects, namely o1 itself and o2. If we
    consider a threshold of 0.8 (that is, now we set a cross between o and a iff
    the condition oI → a is violated by two or fewer objects), then we should add
    a cross between o1 and a1.
  - A similar consideration applies of course to o2.
  The approximate incidence measure S := GObj and the concept lattices for the inci-
dence relations I = JS,1 , JS,0.9 and JS,0.8 are depicted in the next figure.




  Fig 2: A simple example for GObj (left to right: the measure S := GObj, and the
  lattices for thresholds 1, 0.9 and 0.8)

   Generally, adding crosses to an incidence relation can decrease or increase the
number of formal concepts, and one can hardly make statements on how the concept
lattices change. For the herein defined extensions of the incidence relations, the situa-
tion is different. We start with a simple result which does not generally apply to ex-
tensions of incidence relations.

   Lemma: Let (O,A,I) be a formal context, let S ∈ { GObj , GAtt , GObj,Att , LObj , LAtt ,
LObj,Att } be an approximate incidence measure, let t with 0 ≤ t ≤ 1 be a threshold, and
let J := JS,t . Then, for all o1,o2 ∈ O, we have o1I ⊆ o2I ⇒ o1J ⊆ o2J. Similarly, for all
a1,a2 ∈ A we have a1I ⊆ a2I ⇒ a1J ⊆ a2J.
   Proof: We only show the lemma for objects (i.e. the first implication); the proof
for attributes is done analogously.
   Let a ∈ A be an attribute. As we have o1I ⊆ o2I and hence o2II ⊆ o1II, we obtain
DiffObj(o2,a) ⊆ DiffObj(o1,a) and DiffAtt(o2,a) ⊆ DiffAtt(o1,a); thus S(o1,a) ≤ S(o2,a).
Particularly, we get o1Ja ⇒ S(o1,a) ≥ t ⇒ S(o2,a) ≥ t ⇒ o2Ja.     Q.e.d.
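The object part of the lemma also lends itself to a quick empirical sanity check. The following sketch (our own code, restricted to S = GObj, with invented names) builds J = JS,t and verifies that o1I ⊆ o2I implies o1J ⊆ o2J on small random contexts:

```python
import random

def atts_of(o, I):
    """The attribute set o' of a single object."""
    return {b for (x, b) in I if x == o}

def g_obj(o, a, O, I):
    oI  = atts_of(o, I)
    oII = {x for x in O if oI <= atts_of(x, I)}
    aI  = {x for x in O if (x, a) in I}
    return 1 - len(oII - aI) / len(O)   # GObj(o,a) from Fig 1

def lemma_holds_for_objects(O, A, I, t):
    """Check: o1' (w.r.t. I) contained in o2' implies the same w.r.t. J."""
    J = {(o, a) for o in O for a in A if g_obj(o, a, O, I) >= t}
    return all(atts_of(o1, J) <= atts_of(o2, J)
               for o1 in O for o2 in O
               if atts_of(o1, I) <= atts_of(o2, I))
```

Running this over random contexts and thresholds finds no violation, as the lemma predicts.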

   As stated in the introduction, a thorough investigation of the approximate incidence
relations will be the subject of a different paper. Here we only remark that even
though we can prove statements about the relationship between I and JS,t which
do not hold for I and arbitrary extensions J ⊇ I, the dependencies between I and JS,t
have to be investigated further. For example, the author conjectured that we have
PJ = PIIJ for all sets of objects P ⊆ O, but the context in Fig 3, together with the
approximate incidence measure GObj, shows that this equation does not hold in general.
To see this, consider a threshold t = 0.8 and P := {o0, o1, o2}: we then have
PIIJ = {o1, o2, o3, o4} and PJ = {o1, o2, o3, o4, o6, o7, o8}.
    Fig 3: Counterexample for PJ = PIIJ


3       Implementation

   We have extended the tool presented in [3] in order to implement the approach pre-
sented in this paper. The tool, initially developed to transform SPARQL-queries into
formal contexts, can now load formal contexts (independently from SPARQL) and
transform them according to (3). A partial screenshot is provided below.




    Fig 4: screenshot of a tool which implements approximate incidence relations

  The tool allows the user to load a formal context (as a Burmeister file) and computes
all six approximate incidence measures. Either the original context or one of the
measures can be selected for display, as the next screenshot shows.




                     Fig 5: The different metrics as they appear in the tool

    The slider allows the user to adjust the threshold t. A cell for an object o and an
attribute a is marked grey iff S(o,a) < 1 (where S stands for the selected measure).
One can now either store only the single derived context with the incidence relation
JS,t , or a set of derived contexts, where all thresholds 0.9, 0.91, …, 0.99 (these
thresholds are manually chosen and so far hardcoded) and all six measures are used
(thus, 60 Burmeister files are stored).
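For illustration, storing a context in the Burmeister format can be sketched as follows. This is our own minimal writer, assuming the commonly used .cxt layout ("B", a blank line, the object and attribute counts, a blank line, object names, attribute names, then one X/. row per object); it is not the tool's actual export code.

```python
# Minimal Burmeister (.cxt) writer sketch; assumes the common layout.

def write_burmeister(path, objects, attributes, I):
    lines = ["B", "", str(len(objects)), str(len(attributes)), ""]
    lines += [str(o) for o in objects]      # one object name per line
    lines += [str(a) for a in attributes]   # one attribute name per line
    for o in objects:                       # one X/. row per object
        lines.append("".join("X" if (o, a) in I else "." for a in attributes))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```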
4      A thorough, artificial example

  In this section, we exemplify all measures and different thresholds with an artificial
example. Let us consider the following formal context and its concept lattice:




                Fig 6: Example context and lattice for approximate incidence relations

                For this context, we obtain the following values for the six different measures:

                Fig 7: All six approximate incidence measures for the example
                (rows: objects, attributes, objects and attributes; columns: global, local)

   For the thresholds 0.9 (upper half of table) and 0.8 (lower half of table), the next
figure provides for each measure S the corresponding concept lattice.
          Fig 8: Concept lattices for thresholds 0.9 and 0.8 and all six measures
          (columns: objects, attributes, objects and attributes; rows: global and
          local for each threshold)
5      Heriot-Watt University Example

The example of this section is based on the Heriot-Watt University use case in the
research project CUBIST; in particular, it is built upon a real data set, namely
embryo gene expression data. Gene expression information describes whether or not a
gene is expressed (active) in a location. There is a fixed set of levels of
expressiveness: a gene can in some location be detected, or one can more precisely
state that it is weakly, moderately or strongly detected. Moreover, it can be stated
that a gene is not detected, that it is possible for the gene to be expressed, or that
it is not examined. Based on publicly available data, we have generated a context
where the objects are (names of) mouse genes (we have a total of 6613 formal objects
in the context), the attributes are the seven different levels of expressiveness, and
a cross is set between a gene and a level if that gene is detected in some location
with that level of expressiveness. The resulting lattice, having 81 formal concepts,
is depicted below.




             Fig 9: Genes and Levels of Expressiveness without approximation

   The lattice clearly shows a well-known challenge for FCA. Most of the possible
combinations of attributes are indeed instantiated by objects in the context, often
by only a few (the clarified context still contains 77 objects), which leads to an
explosion of the number of concepts. Amongst the concepts, we have some formal
concepts with a large number of objects, but most concepts are rather small and render
the lattice cluttered and hard to read. As we have many objects and only few
attributes, it is sensible to apply the global object-based measure GObj to the
context. Already with a (quite high) threshold of 0.97, the concept lattice is reduced
significantly, as shown in Fig 10.
       Fig 10: Genes and Levels of Expressiveness with approximation, threshold 0.97



6      Summary and next steps

   In this paper, we have introduced a fault tolerance approach for FCA and shown
promising experimental results.

   Of course, first of all, this approach has to be thoroughly investigated from a
formal point of view. It has to be proven that for each approximate incidence measure,
the number of formal concepts decreases when the threshold is decreased. More
specifically, it has to be investigated how, for a given measure and two thresholds
t1 ≤ t2, the concept lattice for t1 can be mapped onto the concept lattice for t2. It
has to be scrutinized how the different measures relate to each other. And it has to
be investigated how the approach taken in this paper relates to other research, most
importantly the work of Boulicaut et al.
   From a conceptual point of view, it has to be worked out how derived concept
lattices are to be understood, particularly what the relationship between derived
lattices and the association rules of the original lattice is.
   Next, from an implementational point of view, we have to elaborate on the algo-
rithm which computes the measures, i.e. scrutinizing its complexity and possibly im-
proving its performance (this is e.g. important for big data sets and real-time applica-
tions).
   Finally, from a user interface point of view, it would be desirable to have a lattice
visualization with a slider to adjust the threshold for a given measure such that adjust-
ing the threshold with the slider leads to animated transitions between the derived
lattices. Here the mappings which have been mentioned in the section about the for-
mal investigations are likely to be helpful, and further mathematical investigations
might be needed.
   In the long run, assuming that the steps mentioned above have been undertaken,
the approach taken in this paper is envisioned to bring FCA closer to business intelli-
gence applications and visual analytics, where one needs easy-to-understand and in-
teractive visualizations for large amounts of data, and where a certain and controlled
loss of information for the sake of simplified diagrams is permissible.

   Acknowledgement This work is part of the CUBIST project (“Combining and Uniting
Business Intelligence with Semantic Technologies”), funded by the European Commission’s
7th Framework Programme of ICT, under topic 4.3: Intelligent Information Management.


7      References
 1. Andrews, S. and McLeod, K.: Gene Co-Expression in Mouse Embryo Tissues. In: Dau, F.
    (ed.) 1st CUBIST Workshop, at ICCS 2011, Derby, UK. CEUR Workshop Proceedings,
    Vol. 753, pp. 1-10. ISSN: 1613-0073
 2. Besson, J., Robardet, C., Boulicaut, J.-F.: Mining a New Fault-Tolerant Pattern Type as
    an Alternative to Formal Concept Discovery. In: Schärfe et al. (eds.) ICCS 2006. LNAI
    4065. Springer (2006)
 3. Dau, F.: Towards Scalingless Generation of Formal Contexts from an Ontology in a Triple
    Store. In: Dau, F. and Andrews, S. (eds.): Proceedings of the 2nd CUBIST Workshop,
    2012. KU Leuven press, 2012
 4. Melo, C.A., Aufaure, M.-A., Le Grand, B. and Bezerianos, A.: Extracting and Visualizing
    Tree-like Structures from Concept Lattices In: 15th International Conference on Infor-
    mation Visualization (IV 2011). London, UK, 2011.
 5. Melo, C.A., Aufaure, M.-A., Bezerianos, A. and Le Grand, B.: Cubix: A Visual Analytics
    Tool for Formal Concept Analysis. In: 23ième Conférence Francophone Sur l’IHM (IHM
    2011) – Demo. Sophia-Antipolis, France, 2011.
 6. Pensa, R.G., Boulicaut, J.-F.: Towards Fault-Tolerant Formal Concept Analysis. In:
    Bandini, S., Manzoni, S. (eds.) AI*IA 2005, LNAI 3673, pp. 212-223. Springer-Verlag,
    Berlin Heidelberg (2005)
 7. Pensa, R.G., Boulicaut, J.-F.: Towards fault-tolerant formal concept analysis. In:
    Congress of the Italian Association for Artificial Intelligence AI*IA, 2005
 8. Stumme, G., Taouil, R., Bastide, Y., Pasquier, N., Lakhal, L.: Computing iceberg concept
    lattices with Titanic. Data & Knowledge Engineering, Volume 42, Issue 2, August 2002,
    Pages 189–222
 9. Stumme, G., Taouil, R., Bastide, Y.: Conceptual clustering with iceberg concept lattices.
    In: Proc. of GI-Fachgruppentreffen Maschinelles Lernen'01, Universität Dortmund, 2001