<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Computing Functional Dependencies with Pattern Structures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jaume Baixeries</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mehdi Kaytoue</string-name>
          <email>mehdi.kaytoue@insa-lyon.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amedeo Napoli</string-name>
          <email>amedeo.napoli@loria.fr</email>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya</institution>
          <addr-line>08032 Barcelona. Catalonia</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Université de Lyon, CNRS</institution>
          <addr-line>INSA-Lyon, LIRIS, UMR5205, F-69621</addr-line>
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <fpage>175</fpage>
      <lpage>186</lpage>
      <abstract>
        <p>The treatment of many-valued data with FCA has been achieved by means of scaling. This method has some drawbacks, since the size of the resulting formal context usually depends on the number of different values present in a table, which can be very large. Pattern structures have proved able to deal with many-valued data, offering a viable and sound alternative to scaling for representing and analyzing sets of many-valued data with FCA. Functional dependencies have already been handled with FCA using the binarization of a table, that is, creating a formal context out of a set of data. Unfortunately, although this method is standard and simple, it has an important drawback: the resulting context is quadratic in the number of objects of the original set of data. In this paper, we examine how the functional dependencies that hold in a set of data can be extracted using pattern structures. This allows building an equivalent concept lattice while avoiding the binarization step, and thus yields better concept representation and computation.</p>
      </abstract>
      <kwd-group>
        <kwd>Association rules and data dependencies</kwd>
        <kwd>attribute implications</kwd>
        <kwd>data dependencies</kwd>
        <kwd>pattern structures</kwd>
        <kwd>formal concept analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction and Motivation</title>
      <p>In the relational database model there are different types of dependencies ([1,18]).
Functional dependencies are among the most popular, since they are central to
explaining the normalization of a database scheme in the relational model.
Functional dependencies (FDs) have their own set of axioms ([5,18]), which, in
turn, are also shared by other kinds of dependencies. For instance, implications
share the same axioms as functional dependencies ([2]), which are basically
reflexivity, augmentation and transitivity.</p>
      <p>These axioms state how functional dependencies behave in the presence of a
set of dependencies of the same kind. For instance, we can decide whether a set
of functional dependencies Σ implies a single FD σ, that is, whether Σ |= σ.</p>
      <p>© 2012 by the paper authors. CLA 2012, pp. 175–186. Copying permitted only for
private and academic purposes. Volume published and copyrighted by its editors.
Local Proceedings in ISBN 978–84–695–5252–0,
Universidad de Málaga (Dept. Matemática Aplicada), Spain.</p>
      <p>We can also find a minimal set of functional dependencies that implies a
given set of them, that is, we can compute Σ′ such that Σ′ |= Σ, where Σ′ is
minimal. In this case, we also say that Σ′ is the minimal generating set of Σ.
These two problems have been studied in [1] and [16], and algorithms have been
proposed. Yet, it is important to note that this calculation is performed starting
from a set of dependencies, not a set of data.</p>
      <p>In this paper, we aim at finding a characterization of functional dependencies
that hold in a set of data using Formal Concept Analysis and pattern structures.</p>
      <p>The lattice characterization of a set of Functional Dependencies is studied in
[6,7,8,9], and the characterization with a formal context in [3,12]. This
characterization is based on a binarization, which is the transformation of the original
set of data into a binary context.</p>
      <p>In fact, the primary concern when computing the characterization of a set of
functional dependencies with FCA is that, generally, the dataset is many-valued,
and not binary. This means that the set of data must be somehow transformed
to obtain a binary context.</p>
      <p>Applying conceptual scaling (without information loss) results either in a
larger set of objects in the resulting formal context, or a larger set of attributes.</p>
      <p>On the other hand, pattern structures ([11,14]) have emerged as a valid
alternative for working with non-binary contexts, and especially with numerical
contexts, as well as for avoiding the complexity drawbacks present in scaling.</p>
      <p>Therefore, we have two different methods of computing the characterization
of a set of functional dependencies that hold in a set of data:
1. Binarizing or scaling the original set of data, and obtaining a formal context.
2. Using pattern structures.</p>
      <p>The purpose of this paper is twofold. On the one hand, we propose using
pattern structures as a way to compute the characterization of a set of functional
dependencies that hold in a set of data. The interest is to prove that pattern
structures are a flexible mechanism that can encode the semantics of functional
dependencies without adding further penalty to the resulting formal context. On
the other hand, we aim at setting up a solid connection between the formalism of
pattern structures and the discovery of other kinds of dependencies that
may hold in a given set of data.</p>
      <p>The paper is organized as follows. Firstly, the definitions of functional
dependencies and their axioms are given in Section 2, together with the scaling
procedure allowing one to derive, from a numerical dataset, a formal context that
characterizes its FDs. Then, Section 3 presents the general formalism of pattern
structures. Section 4 gives the core of this article: it shows how to define a pattern
structure that yields the same concept lattice as the introduced scaling.
Experiments follow in Section 5, showing the interest of using pattern structures.
Finally, the conclusion draws attention to research perspectives.</p>
    </sec>
    <sec id="sec-2">
      <title>Motivating Example</title>
      <p>This example shows how functional dependencies can be extracted using FCA
(this is based on [4]). The main idea behind this method is called binarization,
and consists in transforming (implicitly) a many-valued set of data into a binary
context. This transformation allows us to build a formal context.</p>
      <p>Before explaining this process, we first introduce functional dependencies
(FDs). Let U be a set of attributes, and let Dom be a set of values (a domain).
For the sake of clarity, we assume that Dom is a numerical set. A tuple t is
a function t : U → Dom, and a table T is a set of tuples. Usually tables are
presented as a matrix, as in Table 1 (left).</p>
      <p>The derived formal context takes as objects the set of all pairs of tuples
from T (excluding symmetry and reflexivity). The relation of the context is
defined by (t, t′) I m ⟺ m(t) = m(t′), i.e. a pair of tuples is related to an
attribute m whenever both tuples take the same value for m.</p>
      <p>Pattern Structures in Formal Concept Analysis.
We assume the reader to be familiar with the basic notions of formal concept
analysis; we use the standard notations from [12]. Our interest lies in handling
numerical data within FCA. Hence, we recall here the formalism of pattern
structures, which can be understood as a generalization towards complex data,
i.e. objects taking descriptions in a partially ordered set.</p>
      <p>A pattern structure is defined as a generalization of a formal context
describing complex data [11]. Formally, let G be a set of objects, let (D, ⊓) be a
meet-semi-lattice of potential object descriptions and let δ : G −→ D be a
mapping associating each object with its description. Then (G, (D, ⊓), δ) is a pattern
structure. Elements of D are patterns and are ordered by a subsumption relation
⊑: ∀c, d ∈ D, c ⊑ d ⇐⇒ c ⊓ d = c. A pattern structure (G, (D, ⊓), δ) gives rise
to two derivation operators (·)□:</p>
      <p>A□ = ⊓_{g ∈ A} δ(g) for A ⊆ G, and d□ = {g ∈ G | d ⊑ δ(g)} for d ∈ (D, ⊓).</p>
      <p>These operators form a Galois connection between (2^G, ⊆) and (D, ⊓). Pattern
concepts of (G, (D, ⊓), δ) are pairs of the form (A, d), A ⊆ G, d ∈ (D, ⊓), such
that A□ = d and d□ = A. For a pattern concept (A, d), d is a pattern intent
and is the common description of all objects in A, the pattern extent. When
partially ordered by (A1, d1) ≤ (A2, d2) ⇔ A1 ⊆ A2 (⇔ d2 ⊑ d1), the set
of all pattern concepts forms a complete lattice called the pattern concept lattice.
Existing FCA algorithms [15] can be used with slight modifications to compute pattern
structures, in order to extract and classify concepts (details in [11,14]).</p>
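      <p>To make these two operators concrete, here is a Python sketch that anticipates the partition descriptions of Section 4; the representation (a partition as a frozenset of frozensets) and all function names are assumptions of this sketch:</p>

```python
from functools import reduce
from itertools import product

def part(*classes):
    """Build a partition as a frozenset of frozensets."""
    return frozenset(frozenset(c) for c in classes)

def meet(c, d):
    """⊓: coarsest common refinement of two partitions."""
    return frozenset(a & b for a, b in product(c, d) if a & b)

def subsumed(c, d):
    """c ⊑ d, i.e. c ⊓ d = c (every class of c lies inside a class of d)."""
    return all(any(a <= b for b in d) for a in c)

def box_extent(A, delta):
    """A□ = ⊓_{g ∈ A} δ(g): the common description of the objects in A."""
    return reduce(meet, (delta[g] for g in A))

def box_intent(d, delta):
    """d□ = {g ∈ G | d ⊑ δ(g)}: all objects whose description subsumes d."""
    return {g for g in delta if subsumed(d, delta[g])}

# Hypothetical descriptions δ over "objects" A, B, C (attributes of a table):
delta = {"A": part({1, 2, 4}, {3}),
         "B": part({1, 2, 4}, {3}),
         "C": part({1, 2}, {3, 4})}
d = box_extent({"A", "C"}, delta)    # the partition {{1, 2}, {3}, {4}}
print(sorted(box_intent(d, delta)))  # ['A', 'B', 'C']
```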
      <p>Finding FDs with Partition Pattern Structures.
As introduced in Section 2, an existing binarization allows one to build a concept
lattice from which a set of FDs can be characterized [12]. However, this formal
context tends to be very large, even when the initial data are of reasonable size.
We show here how the formalism of pattern structures can be instantiated to
obtain an equivalent concept lattice. The so-called partition pattern structures
come with several advantages, among which are the computation and interpretation
of the resulting lattice.</p>
      <sec id="sec-2-1">
        <title>Preliminaries on the partition lattice</title>
        <p>Partition of a set. Given a set E, a partition over E is a set P ⊆ ℘(E) s.t.:
– ⋃_{pi ∈ P} pi = E</p>
        <p>– pi ∩ pj = ∅, for any pi, pj ∈ P with i ≠ j.
In other words, a partition covers E and is composed of disjoint subsets of E.
Equivalence relation. A partition P over a set E corresponds to an equivalence relation RP
on E. The one-to-one correspondence between P and RP is given by (e, e′) ∈ RP iff e and
e′ belong to the same class of P [6,13]. For example, given P = {{1, 2, 3}, {4}},
one has the relation RP = {(1, 2), (1, 3), (2, 3), (1, 1), (2, 2), (3, 3), (4, 4)}
(omitting symmetry for the sake of readability).</p>
        <p>Ordering relation. A partition P1 is finer than a partition P2 (P2 coarser than
P1), written P1 ⊑ P2, if every class of P1 is a subset of some class of P2. For
example,</p>
        <p>{{1, 3}, {2}, {4}} ⊑ {{1, 2, 3}, {4}}.
Meet of two partitions. It is defined as the coarsest common refinement. In
other words, it corresponds to the intersection of the respective equivalence relations:
{{1, 3}, {2, 4}} ⊓ {{1, 2, 3}, {4}} = {{1, 3}, {2}, {4}}
or {(1, 3), (2, 4)} ∩ {(1, 2), (1, 3), (2, 3)} = {(1, 3)} (omitting reflexive and symmetric pairs).
Join of two partitions. It is defined as the finest common coarsening. In
other words, it corresponds to the transitive closure of the union of the respective
equivalence relations.</p>
        <p>{{1, 3}, {2}, {4}} ⊔ {{1, 2}, {3}, {4}} = {{1, 2, 3}, {4}}
or transitive closure({(1, 3)} ∪ {(1, 2)}) = {(1, 2), (1, 3), (2, 3)} (omitting
reflexive and symmetric pairs).</p>
        <p>Finally, one should notice that the property P1 ⊓ P2 = P1 ⇔ P1 ⊑ P2
naturally holds (and dually for the join). Thus the set of all partitions over a set
forms a lattice (D, ⊓, ⊔) and can be used as the description space of a pattern
structure.</p>
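        <p>As an illustrative sketch (not code from the paper), the meet and join defined above can be computed directly, representing a partition as a list of sets; the function names are assumptions of this sketch:</p>

```python
from itertools import product

def meet(p1, p2):
    """Coarsest common refinement: non-empty pairwise intersections of classes."""
    return [a & b for a, b in product(p1, p2) if a & b]

def join(p1, p2):
    """Finest common coarsening: repeatedly merge classes that share elements
    (equivalent to the transitive closure of the union of the relations)."""
    merged = []
    for part in [set(p) for p in list(p1) + list(p2)]:
        for m in [m for m in merged if m & part]:
            merged.remove(m)
            part |= m
        merged.append(part)
    return merged

# The two examples from the text:
print(meet([{1, 3}, {2, 4}], [{1, 2, 3}, {4}]))      # [{1, 3}, {2}, {4}]
print(join([{1, 3}, {2}, {4}], [{1, 2}, {3}, {4}]))  # [{1, 2, 3}, {4}]
```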
      </sec>
      <sec id="sec-2-2">
        <title>Partition pattern structure</title>
        <p>Consider a numerical table as a many-valued context (G, M, W, I) where G
corresponds to the set of objects ("rows"), M to the set of attributes ("columns"),
W to the data domain ("all distinct values of the table") and I ⊆ G × M × W a
relation such that (g, m, w) ∈ I, also written m(g) = w, means that attribute m
takes the value w for the object g [12]. In Table 1 (left), we have D(4) = 8.</p>
        <p>We show how a partition pattern structure can be defined from a many-valued
context (G, M, W, I) and show that its concept lattice is equivalent to the concept
lattice obtained after binarization (see Section 2). Intuitively, formal objects of
the pattern structure are the attributes of the numerical dataset. Then, given an
attribute m ∈ M, its description δ(m) is given by a partition over G such that
any two elements g, h of the same class take the same values for the attribute m,
i.e. m(g) = m(h). The result is given in Table 1 (middle).</p>
        <p>Proof. Consider both structures (M, B2(G), I) and (M, (D, ⊓, ⊔), δ). They both
hold the same set of "formal objects" M (attributes in the many-valued
context). In the pattern structure, elements of M are described by a partition over
G. In the formal context, elements are described by pairs of objects, that is, by
definition, the representation of the same partitions. As such, elements of M are
described in an equivalent way. Furthermore, intersections in both
representations are equivalent too. Indeed, the meet operation between two partitions is
known to be the intersection of their equivalence relation representations. Since
derivation operators are defined in pattern structures in the same
way as in formal contexts, the proposition naturally holds.</p>
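        <p>A minimal sketch of δ(m) in Python; the table below is hypothetical (not the paper's Table 1), chosen so that δ(B) matches the pattern concept example that follows:</p>

```python
def attribute_partition(table, m):
    """δ(m): partition of the objects G into classes of equal value on
    attribute m, i.e. g and h share a class iff m(g) == m(h)."""
    classes = {}
    for g, row in table.items():
        classes.setdefault(row[m], set()).add(g)
    return list(classes.values())

# Hypothetical table: object id -> {attribute: value}
table = {1: {"A": 1, "B": 7}, 2: {"A": 1, "B": 7},
         3: {"A": 2, "B": 5}, 4: {"A": 1, "B": 7}}
print(attribute_partition(table, "B"))  # [{1, 2, 4}, {3}]
```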
        <p>Example. The pattern concept ({B}, {{1, 2, 4}, {3}}) is equivalent to the formal
concept ({B}, {(1, 2), (2, 4), (1, 4)}).</p>
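        <p>The translation from a partition intent to the corresponding set-of-pairs intent can be sketched as follows (an illustration, not the paper's code):</p>

```python
from itertools import combinations

def pairs(partition):
    """Equivalence-relation representation of a partition: all pairs of
    objects lying in the same class (symmetry and reflexivity omitted)."""
    return {p for cls in partition for p in combinations(sorted(cls), 2)}

print(sorted(pairs([{1, 2, 4}, {3}])))  # [(1, 2), (1, 4), (2, 4)]
```

A class of size k expands to k(k-1)/2 pairs, which is why the pair representation grows quadratically.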
        <p>From this example, one should remark that pattern structures offer a more
concise intent representation when the set of objects becomes very large, i.e.
storing a partition instead of all pairs of objects that belong to the same
class of the partition. This leads us to the next section, where the attention is
drawn to a computational comparison of both approaches.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Experiments</title>
      <p>We showed how pattern structures can alternatively represent the formal
context (M, B2(G), I) by means of partition patterns. Both concept lattices are
equivalent and thus can be used to characterize FDs. To assess the usefulness of
introducing partition pattern structures, we applied both methods to well-known
UCI datasets. To compute with formal contexts, we wrote a simple procedure
to scale the many-valued context into a formal context, and applied the (C++)
closed itemset mining algorithm LCM (version 2 [17]). Although this algorithm
only computes concept intents, it is known to be one of the most efficient for
that task. To compute with pattern structures, we turned the many-valued
context into a set of partitions over G (one for each attribute m ∈ M) and applied
a slightly modified (Java) version of the algorithm CloseByOne [15]. Indeed, the
latter can easily be adapted by changing the definitions of both the intersection and the
subsumption test used for closure computation (a detailed explanation for
another instance of pattern structures can be found in [14]). As such, this method
computes pattern concepts, i.e. both pattern extents and intents.</p>
      <p>Table 2 gives the details of the datasets and the derived formal contexts we
experiment with. Note that in column |B2(G)|, formal objects (g, h) with empty
description, i.e. {(g, h)}′ = ∅ for any g, h ∈ G, are not taken into account. Table 3
gives the execution time of both methods. For pattern structures, execution
times include reading the data, turning them into a set of partitions and
the CloseByOne execution. Concerning formal contexts, execution times include
data reading and processing with LCM, while the time to build the formal context
is not taken into account. In both cases, the algorithms only output the number of
patterns (datasets available at http://archive.ics.uci.edu/ml/datasets.html).
The experiments were carried out on an Intel Core i7 CPU 2.40 GHz
machine with 5 GB RAM.</p>
      <p>Table 2. For each dataset: the many-valued context (G, M, W, I) with |G| and |M|, and the derived formal context (B2(G), M, I) with |B2(G)| and the average |g′|.</p>
      <p>From Table 3, it can be observed that computing with formal contexts is
faster for the smallest datasets, even abalone, which yields more than 3 million
formal objects. However, with bigger datasets, from 20 to 530 million
objects, partition pattern structures are the only method able to compute the set of
concepts. This holds already for 7 numerical attributes, and even more markedly with 15.
It is indeed well known that the complexity of computing FDs is highly related
to the number of numerical attributes |M|.</p>
      <p>As already suggested in [14,11] in different settings, the explanation is that
when working with simple descriptions (i.e. vectors of bits), computing an
intersection is more efficient than when working with more complex descriptions.
Indeed, partitions are encoded in our algorithm as vectors of bitvectors, and both
the intersection and the inclusion test require considering all pairs of classes of
the two partitions in argument. Although we used optimizations avoiding an
exhaustive computation over all pairs (by considering a lectic order on classes),
these operations are more complex than standard intersections and inclusion
tests between sets. However, we need to compute far fewer intersections, hence
the following trade-off. Pattern structures perform better with larger datasets:
formal objects (numerical attributes) map to concise descriptions (partitions),
whereas they map to the equivalence classes of the same partitions in the case
of formal contexts. Consequently, pattern structures are preferred to formal
contexts when the number of possible pairs of objects that agree on one or more
attributes, |B2(G)|, is high.</p>
      <p>Table 3. Execution times per dataset (iris, balance-scale, flare, glass, crx,
abalone, hepatitis, imports85, krkopt-25% to krkopt-100%, and adult-25% to
adult-100%): number of pattern concepts and time (ms) for CloseByOne versus
number of concept intents and time (ms) for LCMv2.</p>
      <p>Finally, let us recall that execution times for formal contexts do not include
the scaling procedure time, since such a procedure is highly dependent on I/O
performance. We simply remark that conceptual scaling takes more than 5
minutes for the dataset adult-100%, resulting in a text file of more than 10
gigabytes (in the standard format of itemset mining algorithms: each line
corresponds to an object described by the indexes of its attributes, separated by
spaces).</p>
      <p>To conclude, even with a simple Java implementation of CloseByOne
(computing both extents and intents of pattern concepts), we gave here a proof of
concept that pattern structures offer a good trade-off to overcome scaling
difficulties, i.e. for computing the set of concepts whose lattice is equivalent to the
one obtained after conceptual scaling.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>We have presented a method to compute the characterization of a set of
functional dependencies that hold in a set of many-valued data, based on formal
concept analysis and pattern structures. From this characterization, it is simply
a matter of applying well-known algorithms to compute the minimal set of
dependencies that implies the whole set (otherwise known as the Duquenne-Guigues
basis). There were already methods to compute the characterization of those
dependencies using FCA: one possibility is conceptual scaling, which is the
classical method to deal with many-valued data in FCA. This paper proposes to
use pattern structures instead, because they have already been used successfully
to deal with many-valued data ([14]).</p>
      <p>The empirical results compare an algorithm based on scaling with another
based on pattern structures, and show that scaling is faster for small datasets,
whereas pattern structures perform better for large datasets, precisely where the
scaling-based algorithm is not able to compute the output. This indicates that
the new paradigm is more scalable in terms of time and memory. Since datasets
tend to become larger and contain more attributes, this scalability may be a
more important feature than speed on small datasets.</p>
      <p>The results in this paper present a new paradigm for computing a
characterization of functional dependencies that outperforms algorithms based on
classical conceptual scaling, which shows the interest of pattern structures
for dealing with many-valued data within FCA. We think that the results
presented here open the possibility to adapt this pattern-structure-based
framework to other kinds of dependencies, namely multivalued dependencies
and similar constraints that may be found in different fields.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This research work has been supported by the Spanish Ministry of Education
and Science (project TIN2008-06582-C03-01), the EU PASCAL2 Network of
Excellence, and by the Generalitat de Catalunya (2009-SGR-980 and 2009-SGR-1428)
and AGAUR (grant 2010PIV00057), which allowed professor Napoli to visit the
Universitat Politècnica de Catalunya.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>S.</given-names>
            <surname>Abiteboul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hull</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Vianu</surname>
          </string-name>
          . Foundations of Databases,
          <year>1995</year>
          . Addison-Wesley.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J.</given-names>
            <surname>Baixeries</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Balcázar</surname>
          </string-name>
          .
          <article-title>Discrete Deterministic Data Mining as Knowledge Compilation</article-title>
          .
          <source>Proceedings of Workshop on Discrete Mathematics and Data Mining in SIAM International Conference on Data Mining</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>J.</given-names>
            <surname>Baixeries</surname>
          </string-name>
          .
          <article-title>A Formal Concept Analysis framework to model functional dependencies</article-title>
          .
          <source>Mathematical Methods for Learning</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J.</given-names>
            <surname>Baixeries</surname>
          </string-name>
          .
          <article-title>Lattice Characterization of Armstrong and Symmetric Dependencies</article-title>
          .
          <source>PhD Thesis</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>C.</given-names>
            <surname>Beeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fagin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Howard</surname>
          </string-name>
          .
          <article-title>A Complete Axiomatization for Functional and Multivalued Dependencies in Database Relations</article-title>
          .
          <source>Proceedings of the 1977 ACM SIGMOD International Conference on Management of Data</source>
          , Toronto, Canada,
          <source>August 3-5</source>
          ,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>N.</given-names>
            <surname>Caspard</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Monjardet</surname>
          </string-name>
          .
          <source>The Lattices of Closure Systems, Closure Operators, and Implicational Systems on a Finite Set: a Survey</source>
          .
          <source>Proceedings of the 1998 Conference on Ordinal and Symbolic Data Analysis (OSDA-98)</source>
          .
          <source>Discrete Applied Mathematics</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>A.</given-names>
            <surname>Day</surname>
          </string-name>
          .
          <article-title>The Lattice Theory of Functional Dependencies and Normal Decompositions</article-title>
          .
          <source>International Journal of Algebra and Computation</source>
          Vol.
          <volume>2</volume>
          , No.
          <volume>4</volume>
          <fpage>409</fpage>
          -
          <lpage>431</lpage>
          .
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>J.</given-names>
            <surname>Demetrovics</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hencsey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Libkin</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Muchnik</surname>
          </string-name>
          .
          <article-title>Normal Form Relation Schemes: a New Characterization</article-title>
          .
          <source>Acta Cybernetica</source>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>J.</given-names>
            <surname>Demetrovics</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Libkin</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Muchnik</surname>
          </string-name>
          .
          <article-title>Functional Dependencies in Relational Databases: a Lattice Point of View</article-title>
          .
          <source>Discrete Applied Mathematics</source>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>V.</given-names>
            <surname>Duquenne</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.L.</given-names>
            <surname>Guigues</surname>
          </string-name>
          .
          <article-title>Familles Minimales d'Implications Informatives Résultant d'un Tableau de Données Binaires</article-title>
          .
          <source>Mathematics and Social Sciences</source>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>B.</given-names>
            <surname>Ganter</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          .
          <article-title>Pattern structures and their projections</article-title>
          . In: Delugach, H., Stumme, G. (eds.)
          <article-title>Conceptual Structures: Broadening the Base</article-title>
          ,
          <source>Proceedings of the 9th International Conference on Conceptual Structures (ICCS</source>
          <year>2001</year>
          ). pp.
          <fpage>129</fpage>
          -
          <lpage>142</lpage>
          . LNCS 2120, Springer (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>B.</given-names>
            <surname>Ganter</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          <article-title>: Formal Concept Analysis</article-title>
          . Springer, Berlin (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. G. Grätzer. General Lattice Theory. Academic Press,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaytoue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Napoli</surname>
          </string-name>
          .
          <article-title>Revisiting Numerical Pattern Mining with Formal Concept Analysis</article-title>
          .
          <source>IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence</source>
          , Barcelona, Catalonia.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Obiedkov</surname>
          </string-name>
          :
          <article-title>Comparing performance of algorithms for generating concept lattices</article-title>
          .
          <source>Journal of Experimental &amp; Theoretical Artificial Intelligence</source>
          <volume>14</volume>
          (
          <issue>2</issue>
          /3),
          <fpage>189</fpage>
          -
          <lpage>216</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>D.</given-names>
            <surname>Simovici</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Tenney</surname>
          </string-name>
          .
          <source>Relational Database Systems</source>
          . Academic Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17. T. Uno,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kiyomi</surname>
          </string-name>
          , and H. Arimura:
          <article-title>LCM ver. 2: Efficient mining algorithms for frequent/closed/maximal itemsets</article-title>
          .
          <source>IEEE ICDM Workshop on Frequent Itemset Mining Implementations</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>J. D. Ullman</surname>
          </string-name>
          .
          <article-title>Principles of Database Systems</article-title>
          . Computer Science Press,
          <year>1982</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>