<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Comparing Performance of Algorithms for Generating Concept Lattices</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sergei</forename><forename type="middle">O</forename><surname>Kuznetsov</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Sergei</forename><forename type="middle">A</forename><surname>Ob''edkov</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Russian State University for the Humanities</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Institute for Scientific and Technical Information (VINITI)</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Comparing Performance of Algorithms for Generating Concept Lattices</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C2B6CAAD78093796F29C09FB88A8BF1E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Several algorithms that generate the set of all formal concepts and diagram graphs of concept lattices are considered. Some modifications of well-known algorithms are proposed. The complexity of the algorithms is studied both theoretically (in the worst case) and experimentally. Conditions for the preferable use of some algorithms are given in terms of density/sparseness of the underlying formal contexts. Principles of comparing the practical performance of algorithms are discussed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Concept (Galois) lattices proved to be a useful tool in many applied domains: machine learning, data mining and knowledge discovery, information retrieval, etc. <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref>. The problem of generating the set of all concepts and the concept lattice of a formal context is extensively studied in the literature <ref type="bibr">[2-5, 7, 11, 13, 16, 18-22]</ref>. It is known that the number of concepts can be exponential in the size of the input context (e.g., when the lattice is a Boolean one) and the problem of determining this number is #P-complete <ref type="bibr" target="#b14">[15]</ref>.</p><p>Therefore, from the standpoint of the worst-case complexity, an algorithm generating all concepts and/or a concept lattice can be considered optimal if it generates the lattice with polynomial time delay and space linear in the number of all concepts (modulo some factor polynomial in the input size). First, we give some standard definitions of Formal Concept Analysis (FCA) <ref type="bibr" target="#b7">[8]</ref>.</p><p>A formal context is a triple of sets (G, M, I), where G is called a set of objects, M is called a set of attributes, and I ⊆ G × M. For A ⊆ G and B ⊆ M: A' = {m ∈ M | ∀g∈A (gIm)}, B' = {g ∈ G | ∀m∈B (gIm)}. A formal concept of a formal context (G, M, I) is a pair (A, B), where A ⊆ G, B ⊆ M, A' = B, and B' = A. The set A is called the extent, and the set B is called the intent of the concept (A, B). For a context (G, M, I), a concept X = (A, B) is less general than or equal to a concept Y = (C, D) (or X ≤ Y) if A ⊆ C or, equivalently, D ⊆ B. 
For two concepts X and Y such that X ≤ Y and there is no concept Z with Z ≠ X, Z ≠ Y, X ≤ Z ≤ Y, the concept X is called a lower neighbor of Y, and Y is called an upper neighbor of X. This relationship is denoted by X ≺ Y. We call the (directed) graph of this relation a diagram graph. A plane (not necessarily planar) embedding of a diagram graph where a concept has a larger vertical coordinate than any of its lower neighbors is called a line (Hasse) diagram. The problem of drawing line diagrams <ref type="bibr" target="#b7">[8]</ref> is not discussed here.</p><p>The problem of comparing performance of algorithms for constructing concept lattices and their diagram graphs is a challenging and multifaceted one. The first comparative study of several algorithms constructing the concept set and diagram graphs can be found in <ref type="bibr" target="#b12">[13]</ref>. However, the formulation of the algorithms is not always correct, and the description of the results of the experimental tests lacks any information about the data used for the tests. The fact that the choice of the algorithm should depend on the input data is not accounted for. Besides, only one of the algorithms considered in <ref type="bibr" target="#b12">[13]</ref>, namely that of Bordat <ref type="bibr" target="#b1">[2]</ref>, constructs the diagram graph; thus, it is hard to compare its time complexity with that of the other algorithms.</p><p>A later review with more algorithms and more information on experimental data can be found in <ref type="bibr" target="#b10">[11]</ref>. Only algorithms generating diagram graphs are considered. The algorithms that were not originally designed for this purpose are extended by the authors to generate diagram graphs. Unfortunately, such extensions are not always effective: for example, the time complexity of the version of the Ganter algorithm (called Ganter-Allaoui) dramatically increases with the growth of the context size. 
This drawback can be avoided by the efficient use of binary search in the list produced by the original Ganter algorithm. Tests were conducted only for contexts with a small number of attributes per object as compared to the number of all attributes. Our experiments (we consider some other algorithms, e.g., that of Nourine <ref type="bibr" target="#b20">[21]</ref>) also show that the algorithm proposed in <ref type="bibr" target="#b10">[11]</ref> works faster on such contexts than the others do. However, in other situations not covered in <ref type="bibr" target="#b10">[11]</ref>, this algorithm falls far behind some other algorithms.</p><p>The rest of the paper is organized as follows. In Section 2, we discuss the principles of comparing the efficiency of algorithms and make an attempt at their classification. In Section 3, we give a short review of the algorithms and analyze their worst-case complexity. In Section 4, we present the results of the experimental comparison.</p></div>
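The derivation operators and the concept condition defined above can be sketched in a few lines of Python (a minimal illustration with hypothetical helper names; the context is given as a set of (object, attribute) pairs):

```python
def prime_objects(A, I, M):
    """A' = the attributes shared by all objects in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def prime_attrs(B, I, G):
    """B' = the objects having all attributes in B."""
    return {g for g in G if all((g, m) in I for m in B)}

def is_concept(A, B, I, G, M):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return prime_objects(A, I, M) == B and prime_attrs(B, I, G) == A
```

For instance, in a context where object 1 has attributes a and b and object 2 has only a, the pair ({1, 2}, {a}) is a formal concept, while ({1}, {a}) is not, since {1}' = {a, b}.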
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">On Principles of Comparison</head><p>In our study, we considered both the theoretical (worst-case) and the experimental complexity of algorithms. As for the worst-case upper bounds, the algorithms with complexity linear in the number of concepts (modulo a factor polynomial in the input size) are better than those with complexity quadratic in the number of concepts; and the former group can be subdivided into smaller groups according to the form of the polynomial factor. According to this criterion, the present champion is the algorithm by Nourine <ref type="bibr" target="#b20">[21]</ref>. On the other hand, "dense" contexts, which realize the worst case by bringing about an exponential number of concepts, may not occur often in practice.</p><p>Starting a comparison of algorithms "in practice", we face a number of problems. First, algorithms, as described by their authors, often allow for different interpretations of crucial details, such as the test of uniqueness of a generated concept. Second, authors seldom describe exactly the data structures and their implementations. Third, algorithms behave differently on different databases (contexts). Sometimes authors compare their algorithms with others on specific data sets. We would like to propose that the community reach a consensus on the databases to be used as testbeds. Our idea is to consider two types of testbeds. On the one hand, some "classical" (well-recognized in the data analysis community) databases should be used, with clearly defined scalings if they are many-valued. On the other hand, we propose to use "randomly generated contexts". The main parameters of a context K = (G, M, I) seem to be the number of objects |G| (relative to |M|), the number of attributes |M| (relative to |G|), the size of the relation I (relative to |G||M|), and the average number of attributes per object intent (resp., the average number of objects per attribute extent). 
The community should specify particular type(s) of random context generator(s) that can be tuned by the choice of the above (or some other) parameters.</p><p>Another major difficulty resides in the choice of a programming language and platform, which strongly affects performance. A possible way of avoiding this difficulty could be comparing not the running time but the number of specified operations (intersections, unions, closures, etc.) from a certain library; but here one encounters the difficulty of weighting these operations in order to obtain the overall performance. A much simpler approach is to compare the algorithms on a single platform.</p><p>In this article, we compare the performance of several algorithms on clearly specified random data sets (contexts). As for ambiguities in the original pseudo-code formulations of the algorithms, we tried to find the most efficient implementations for them. Of course, this does not guarantee that a more efficient implementation cannot be found.</p><p>In most cases, it was possible to improve the original versions of the algorithms. Since only a few known algorithms generating the concept set also construct the diagram graph, we attempted to modify some algorithms so that they can construct diagram graphs. The versions of the algorithms used for comparison are presented in <ref type="bibr" target="#b16">[17]</ref>.</p><p>As mentioned above, the data structures that realize concept sets and diagram graphs of concept lattices are of great importance. Since their sizes can be exponential w.r.t. the input size, some of their natural representations, unlike those of ordinary graphs, are not polynomially equivalent. For example, the size of the incidence matrix of a diagram graph is quadratic w.r.t. the size of the incidence list of the diagram graph and thus cannot be reduced to the latter in time polynomial w.r.t. the input. 
Moreover, some important operations, such as finding a concept, can be performed in polynomial time for some representations (spanning trees <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b9">10]</ref>, ordered lists <ref type="bibr" target="#b6">[7]</ref>, CbO trees <ref type="bibr" target="#b15">[16]</ref>, 2-3 trees, see <ref type="bibr" target="#b0">[1]</ref> for the definition), but only in exponential time for some other representations (unordered lists). A representation of a concept lattice can be considered reasonable if its size cannot be exponentially compressed w.r.t. the input and it allows searching for a particular concept in time polynomial in the input.</p><p>Table <ref type="table">1</ref> presents an attempt at a classification of the algorithms. Note that this classification refers to our versions of the algorithms described in <ref type="bibr" target="#b16">[17]</ref> rather than to the original versions (except for Titanic <ref type="bibr" target="#b21">[22]</ref>, which we have not implemented; it is included in the classification because it realizes an approach completely different from that of the other algorithms). 
Here, we do not address techniques for building diagram graphs; the attributes of the context in Table <ref type="table">1</ref> describe only the construction of the concept set.</p><p>All the algorithms can be divided into two categories: incremental algorithms <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b19">20]</ref>, which, at the ith step, produce the concept set or the diagram graph for the first i objects of the context, and batch ones, which build the concept set and its diagram graph for the whole context from scratch <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b23">24]</ref>. Besides, any batch algorithm typically adheres to one of two strategies: top-down (from the maximal extent to the minimal one) or bottom-up (from the minimal extent to the maximal one). However, it is always possible to reverse the strategy of an algorithm by considering attributes instead of objects and vice versa; therefore, we choose not to include this property in the classification.</p><p>Generation of the concept set presents two main problems: (1) how to generate all concepts; (2) how to avoid repetitive generation of the same concept or, at least, how to determine whether a concept is generated for the first time. There are several ways to generate a new intent. Some algorithms (in particular, incremental ones) intersect a generated intent with some object intent. Other algorithms compute an intent by explicitly intersecting the intents of all objects of the corresponding extent. There are algorithms that, starting from object intents, create new intents by intersecting already obtained intents. Lastly, the algorithm from <ref type="bibr" target="#b21">[22]</ref> does not use the intersection operation to generate intents. 
It forms new intents by adding attributes to those already generated and tests some condition on the supports of attribute sets (the support of an attribute set is the number of objects whose intents contain all attributes from this set) to determine whether an attribute set is an intent.</p><p>In Table <ref type="table">1</ref>, attributes m2-m6 correspond to techniques used to avoid repetitive generation of the same concept. This can be done by maintaining specific data structures. For example, the Nourine algorithm constructs a tree of concepts and searches in this tree for every newly generated concept. Note that other algorithms (e.g., Bordat and Close by One) may also use trees for storing concepts, which allows efficient search for a concept when the diagram graph is to be constructed. However, these algorithms use other techniques for identifying the first generation of a concept, and, therefore, they do not have the m5 attribute in the context from Table <ref type="table">1</ref>. Some algorithms divide the set of all concepts into disjoint sets, which allows narrowing down the search space. For example, the Chein algorithm stores concepts in layers, each layer corresponding to some step of the algorithm. The original version of this algorithm looks through the current layer each time a new concept is generated. The version we used for comparison does not involve search to detect duplicate concepts; instead, it employs a canonicity test based on the lexicographical order (similar to that of Ganter), which made it possible to greatly improve the efficiency of the algorithm. We use layers only for the generation of concepts: a new intent is produced as the intersection of two intents from the same layer. (In our version of the algorithm, layers are much smaller than those in <ref type="bibr" target="#b3">[4]</ref>; see <ref type="bibr" target="#b16">[17]</ref> for details.) 
The Godin algorithm uses a hash function (the cardinality of intents), which makes it possible to distribute concepts among "buckets" and to reduce the search. Several algorithms (Ganter, Close by One) generate concepts in the lexicographical order of their extents, assuming that there is a linear order on the set of objects. At each step of the algorithm, there is a current object. The generation of a concept is considered canonical if its extent contains no object preceding the current object. Our implementation of the Bordat algorithm uses an attribute cache: the uniqueness of a concept is tested by intersecting its intent with the content of the cache (for more details, see <ref type="bibr" target="#b16">[17]</ref>).</p><p>Table <ref type="table">1</ref>. Properties of algorithms constructing concept lattices: m1-incremental; m2-uses a canonicity test based on the lexicographical order; m3-divides the set of concepts into several parts; m4-uses a hash function; m5-maintains an auxiliary tree structure; m6-uses an attribute cache; m7-computes intents by successively intersecting object intents (i.e., {g}' ∩ {h}'); m8-computes intersections of already generated intents; m9-computes intersections of non-object intents and object intents; m10-uses supports of attribute sets. m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 Bordat X X Ganter X X In many cases, we attempted to improve the efficiency of the original algorithms. Only some of the original versions of the algorithms construct the diagram graph <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b20">21]</ref>; it turned out that the other algorithms could be extended to construct the diagram graph within the same worst-case time complexity bounds.</p></div>
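The random-context generation scheme proposed in this section (and used for the experiments in Section 4) can be sketched as follows; a minimal Python illustration with names of our own choosing, adopting the fixed-intent-size convention of Section 4:

```python
import random

def random_context(num_objects, num_attrs, attrs_per_object, seed=None):
    """Generate a random context as a list of object intents; every
    intent contains exactly attrs_per_object attributes drawn uniformly
    at random (cf. the |g'| parameter in Section 4)."""
    rng = random.Random(seed)
    return [set(rng.sample(range(num_attrs), attrs_per_object))
            for _ in range(num_objects)]
```

The density of the relation I is then |g'|/|M|; e.g., random_context(900, 100, 4) yields a sparse context of density 0.04.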
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Close by One</head><formula xml:id="formula_0">X X Lindig X X Chein X X X Nourine X X X Norris X X X Godin X X X X Dowling X X X Titanic X</formula><p>In the next section, we will discuss worst-case complexity bounds of the considered algorithms. Since the output size can be exponential in the input, it is reasonable to estimate the complexity of the algorithms not only in terms of input and output sizes, but also in terms of (cumulative) delay. Recall that an algorithm for listing a family of combinatorial structures is said to have polynomial delay <ref type="bibr" target="#b13">[14]</ref> if it executes at most polynomially many computation steps before either outputting each next structure or halting. An algorithm is said to have a cumulative delay d <ref type="bibr" target="#b11">[12]</ref> if, at any point of time in any execution of the algorithm on any input p, the total number of computation steps executed so far is at most d(p) plus the product of d(p) and the number of structures that have been output so far. If d(p) can be bounded by a polynomial in p, the algorithm is said to have a polynomial cumulative delay.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Algorithms: a Short Survey</head><p>Some top-down algorithms have been proposed in <ref type="bibr" target="#b1">[2]</ref> and <ref type="bibr" target="#b23">[24]</ref>. The algorithm MItree from <ref type="bibr" target="#b23">[24]</ref> generates the concept set, but does not build the diagram graph. In MItree, every new concept is searched for in the set of all concepts generated so far. The algorithm of Bordat <ref type="bibr" target="#b1">[2]</ref> uses a tree (a "trie," cf. <ref type="bibr" target="#b0">[1]</ref>) for fast storing and retrieval of concepts. Our version of this algorithm uses a technique that requires O(|M|) time to determine whether a concept is generated for the first time, without any search. The Close by One (CbO) algorithm uses a notion of canonicity and a method for selecting subsets similar to Ganter's, as well as an intermediate structure that helps to compute closures more efficiently using the generated concepts. Its time complexity is O(|G|²|M||L|), and its polynomial delay is O(|G|³|M|).</p><p>The idea of the bottom-up algorithm in <ref type="bibr" target="#b17">[18]</ref> is to generate the bottom concept and then, for each concept generated for the first time, generate all its upper neighbors. Lindig uses a tree of concepts that allows one to check whether some concept was generated earlier. The time complexity of the algorithm is O(|G|²|M||L|). Its polynomial delay is O(|G|²|M|).</p><p>The Chein algorithm <ref type="bibr" target="#b3">[4]</ref> represents the objects by extent-intent pairs and generates each new concept intent as the intersection of the intents of two existing concepts. At every iteration step of the Chein algorithm, a new layer of concepts is created by intersecting pairs of concept intents from the current layer, and the new intent is searched for in the new layer. 
We introduced several modifications that made it possible to greatly improve the performance of the algorithm. The time complexity of the modified algorithm is O(|G| 3 </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>|M||L|). The algorithm has polynomial delay O(|G| 3 |M|).</head><p>Due to their incremental nature, the algorithms considered below do not have polynomial delay. Nevertheless, they all have cumulative polynomial delay.</p><p>Nourine proposes an O((|G| + |M|)|G||L|) algorithm for the construction of the lattice using a lexicographic tree <ref type="bibr" target="#b20">[21]</ref> with edges labeled by attributes and nodes labeled by concepts. Note that this algorithm is only half-incremental. First, this algorithm incrementally constructs the concept set outputting a tree of concepts; next, it uses this tree to construct the diagram graph.</p><p>The algorithm proposed by Norris <ref type="bibr" target="#b19">[20]</ref> is essentially an incremental version of the CbO algorithm. The original version of the Norris algorithm from <ref type="bibr" target="#b19">[20]</ref> does not construct the diagram graph. The time complexity of the algorithm is O(|G| 2 |M||L|).</p><p>The algorithm proposed by Godin <ref type="bibr" target="#b10">[11]</ref> has the worst-case time complexity quadratic in the number of concepts. This algorithm is based on the use of an efficiently computable hash function f (which is actually the cardinality of an intent) defined on the set of concepts.</p><p>Dowling proposed <ref type="bibr" target="#b4">[5]</ref> an incremental algorithm for computing knowledge spaces. A dual formulation of the algorithm allows generation of the concept set. Despite the fact that the theoretical worst-case complexity of the algorithm is O(|M||G| 2 |L|), the constant in this upper bound seems to be too large and in practice the algorithm performs worse than other algorithms.</p></div>
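The lectic (lexicographical) canonicity test underlying the Ganter and CbO algorithms can be illustrated by a compact sketch of Ganter's NextClosure procedure. This is an illustration under our own naming, assuming attributes numbered 0..n−1 and a context given as a list of object intents:

```python
def next_closure(A, n, closure):
    """Return the lectically next closed set after A, or None if A is the last.
    closure: an arbitrary closure operator on subsets of range(n)."""
    A = set(A)
    for i in range(n - 1, -1, -1):
        if i in A:
            A.discard(i)
        else:
            B = closure(A | {i})
            # Canonicity test: the generation is canonical iff the closure
            # adds no attribute smaller than i.
            if min(B - A) >= i:
                return B
    return None

def all_intents(context, n):
    """Enumerate all concept intents of a context (a list of object intents)."""
    def closure(B):
        extent = [g for g in context if B <= g]
        if not extent:
            return set(range(n))  # B'' = M when no object has all of B
        out = set(range(n))
        for g in extent:
            out &= g
        return out
    A = closure(set())
    while A is not None:
        yield A
        A = next_closure(A, n, closure)
```

Between two consecutive outputs, at most n closure computations are performed, which is what gives this scheme its polynomial delay.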
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Results of Experimental Tests</head><p>The algorithms were implemented in C++ in the Microsoft Visual C++ environment. The tests were run on a Pentium II-300 computer with 256 MB RAM. Here, we present a number of charts that show how the execution time of the algorithms depends on various parameters. More charts can be found in <ref type="bibr" target="#b16">[17]</ref>. For the tests, we used randomly generated data. Contexts were generated based on three parameters: |G|, |M|, and the number of attributes per object (denoted below as |g'|; all objects of the same context had equal numbers of attributes). Given |g'|, every row of the context (i.e., every object intent) was generated by successively calling the rand function from the standard C library to obtain the numbers of the attributes constituting the object intent, which led to a uniform distribution. The Godin algorithm (and GodinEx, which is the version of the Godin algorithm using the cardinality of extents for the hash function) is a good choice in the case of a sparse context. However, when contexts become denser, its performance decreases dramatically. The Bordat algorithm seems most suitable for large contexts, especially if it is necessary to build the diagram graph. When |G| is small, the Bordat algorithm runs several times slower than the other algorithms, but, as |G| grows, the difference between Bordat and the other algorithms becomes smaller, and, in many cases, Bordat finally turns out to be the leader. For large and dense contexts, the fastest algorithms are the bottom-up canonicity-based algorithms (Norris, CbO). It should be noted that the Nourine algorithm, featuring the smallest worst-case time complexity, was not the fastest algorithm: even when diagonal contexts of the form (G, G, ≠) (which correspond to the worst case) are processed, its performance was inferior to that of the Norris algorithm. 
Probably, this can be attributed to the fact that we represent attribute sets by bit strings, which allows a very efficient implementation of set-theoretical operations (32 attributes per processor cycle); whereas, when searching in the Nourine-style lexicographic tree, one must still individually consider each attribute labeling the edges.</p></div>
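The bit-string representation mentioned above can be sketched as follows; a minimal Python illustration (in the C++ implementation, machine words play the role of Python's arbitrary-precision integers):

```python
def to_bitset(attrs):
    """Encode an attribute set as an integer bit mask."""
    mask = 0
    for a in attrs:
        mask |= 1 << a
    return mask

def from_bitset(mask):
    """Decode a bit mask back into a set of attribute indices."""
    out, i = set(), 0
    while mask:
        if mask & 1:
            out.add(i)
        mask >>= 1
        i += 1
    return out

def intersect(m1, m2):
    """Set intersection becomes a single bitwise AND over whole words,
    processing many attributes per machine instruction."""
    return m1 & m2
```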
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this work, we attempted to compare, both theoretically and experimentally, some well-known algorithms for constructing concept lattices. We discussed principles of experimental comparison.</p><p>A new algorithm was proposed in <ref type="bibr" target="#b21">[22]</ref> quite recently, so we could not include it in our experiments. Its worst-case time complexity is not better than that of the algorithms described above, but the authors report good practical performance for databases with a very large number of objects. Comparing the performance of this algorithm with those considered above and testing the algorithms on large databases, including "classical" ones, will be the subject of further work. We can also mention works <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b18">[19]</ref>, where similar algorithms were applied to machine learning and data analysis; e.g., in <ref type="bibr" target="#b18">[19]</ref> a Bordat-type algorithm was used.</p><p>The choice of an algorithm for the construction of the concept lattice should be based on the properties of the input data. Recommendations based on our experiments are as follows: the Godin algorithm should be used for small and sparse contexts; for dense contexts, the algorithms based on the canonicity test, linear in the number of input objects, such as Close by One and Norris, should be used. Bordat performs well on contexts of average density, especially when the diagram graph is to be constructed. Of course, these recommendations should not be considered as a final judgement. By this work, we would rather like to stimulate further interest in well-substantiated comparison of algorithms that generate concept lattices.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1.</head><label>1</label><figDesc>Fig. 1. 
The line diagram of algorithms</figDesc><graphic coords="5,124.88,426.66,345.59,199.91" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>The time complexity of Bordat is O(|G||M|²|L|), where |L| is the size of the concept lattice. Moreover, this algorithm has a polynomial delay O(|G||M|²). The algorithm proposed by Ganter computes closures for only some of the subsets of G and uses an efficient canonicity test, which does not address the list of generated concepts. It produces the set of all concepts in time O(|G|²|M||L|) and has polynomial delay O(|G|²|M|).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2. Fig. 3.</head><label>23</label><figDesc>Fig. 2. Concept set: |M| = 100; |g'| = 4.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4.</head><label>4</label><figDesc>Fig. 4. Concept set: |M| = 100; |g'| = 25.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 5.</head><label>5</label><figDesc>Fig. 5. Diagram graph: |M| = 100; |g'| = 25.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 6.</head><label>6</label><figDesc>Fig. 6. Concept set: |M| = 100; |g'| = 50.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Fig. 7. Fig. 8. Fig. 9.</head><label>789</label><figDesc>Fig. 7. Diagram graph: |M| = 100; |g'| = 50.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Data Structures and Algorithms</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Aho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Hopcroft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ullman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1983">1983</date>
			<publisher>Addison-Wesley</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Calcul pratique du treillis de Galois d&apos;une correspondance</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Bordat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Math. Sci. Hum</title>
		<imprint>
			<biblScope unit="issue">96</biblScope>
			<biblScope unit="page" from="31" to="47" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Lattice Conceptual Clustering System and Its Application to Browsing Retrieval</title>
		<author>
			<persName><forename type="first">C</forename><surname>Carpineto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Romano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning</title>
				<imprint>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="95" to="122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Algorithme de recherche des sous-matrices premières d&apos;une matrice</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bull. Math. Soc. Sci. Math. R.S. Roumanie</title>
		<imprint>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="21" to="25" />
			<date type="published" when="1969">1969</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On the Irredundant Generation of Knowledge Spaces</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Dowling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Math. Psych</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="49" to="62" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Plausible Reasoning in Systems of JSM Type</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">K</forename><surname>Finn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Itogi Nauki i Tekhniki, Ser. Informatika</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="54" to="101" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Two Basic Algorithms in Concept Analysis</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">FB4-Preprint</title>
		<imprint>
			<biblScope unit="volume">831</biblScope>
			<date type="published" when="1984">1984</date>
			<publisher>TH Darmstadt</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Formal Concept Analysis: Mathematical Foundations</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Formalizing Hypotheses with Concepts</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kuznetsov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 8th International Conference on Conceptual Structures, ICCS 2000</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<meeting>of the 8th International Conference on Conceptual Structures, ICCS 2000</meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">1867</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Finding All Closed Sets: A General Approach</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Reuter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Order</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="283" to="290" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Incremental Concept Formation Algorithms Based on Galois Lattices</title>
		<author>
			<persName><forename type="first">R</forename><surname>Godin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Missaoui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alaoui</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Efficient Algorithms for Listing Combinatorial Structures</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Goldberg</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1993">1993</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Construction du treillis de Galois d&apos;une relation binaire</title>
		<author>
			<persName><forename type="first">A</forename><surname>Guénoche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Math. Inf. Sci. Hum</title>
		<imprint>
			<biblScope unit="issue">109</biblScope>
			<biblScope unit="page" from="41" to="53" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">On Generating all Maximal Independent Sets</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yannakakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Papadimitriou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Inf. Proc. Lett.</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="119" to="123" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Interpretation on Graphs and Complexity Characteristics of a Search for Specific Patterns</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Kuznetsov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Automatic Documentation and Mathematical Linguistics</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="37" to="45" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A Fast Algorithm for Computing All Intersections of Objects in a Finite Semi-lattice</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Kuznetsov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Automatic Documentation and Mathematical Linguistics</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="11" to="21" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Algorithm for the Construction of the Set of All Concepts and Their Line Diagram</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Kuznetsov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Ob''edkov</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000-06">June 2000</date>
		</imprint>
		<respStmt>
			<orgName>TU-Dresden</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Preprint MATH-AL-05</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Lindig</surname></persName>
		</author>
		<title level="m">Algorithmen zur Begriffsanalyse und ihre Anwendung bei Softwarebibliotheken</title>
		<imprint>
			<date type="published" when="1999">1999</date>
		</imprint>
		<respStmt>
			<orgName>Techn. Univ. Braunschweig</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Dissertation (Dr.-Ing.)</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Using Lattice-Based Framework As a Tool for Feature Extraction</title>
		<author>
			<persName><forename type="first">E</forename><surname>Mephu Nguifo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Njiwoua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Feature Extraction, Construction and Selection: A Data Mining Perspective</title>
		<editor>H. Liu and H. Motoda</editor>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>Kluwer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">An Algorithm for Computing the Maximal Rectangles in a Binary Relation</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Norris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Revue Roumaine de Mathématiques Pures et Appliquées</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="243" to="250" />
			<date type="published" when="1978">1978</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A Fast Algorithm for Building Lattices</title>
		<author>
			<persName><forename type="first">L</forename><surname>Nourine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Raynaud</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Processing Letters</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="page" from="199" to="204" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Fast Computation of Concept Lattices Using Data Mining Techniques</title>
		<author>
			<persName><forename type="first">G</forename><surname>Stumme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Taouil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bastide</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Pasquier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lakhal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 7th Int. Workshop on Knowledge Representation Meets Databases</title>
		<meeting>7th Int. Workshop on Knowledge Representation Meets Databases<address><addrLine>KRDB</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="129" to="139" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Conceptual Knowledge Discovery in Databases Using Formal Concept Analysis Methods</title>
		<author>
			<persName><forename type="first">G</forename><surname>Stumme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Wille</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 2nd European Symposium on Principles of Data Mining and Knowledge Discovery (PKDD&apos;98)</title>
		<meeting>2nd European Symposium on Principles of Data Mining and Knowledge Discovery (PKDD&apos;98)</meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Algorithms and Programs of the JSM-Method of Automatic Hypothesis Generation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Zabezhailo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Ivashko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Kuznetsov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Mikheenkova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Khazanovskii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">M</forename><surname>Anshakov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Automatic Documentation and Mathematical Linguistics</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="14" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
