<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enabling natural language analytics over relational data using Formal Concept Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>C. Anantaram</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mouli Rastogi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mrinal Rawat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pratik Saini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>TCS Research, Tata Consultancy Services Ltd</institution>
          ,
          <addr-line>Gwal Pahari, Gurgaon, India (c.anantaram; mouli.r; rawat.mrinal; pratik.saini) @tcs.com</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Analysts like to pose a variety of questions over large relational databases containing data on the domain that they are analyzing. Enabling natural language question answering over such data for analysts requires mechanisms to extract exceptions in the data, find steps to transform data, detect implications in the data, and apply classifications on the data. Motivated by this problem, we propose a semantically enriched deep learning pipeline that supports natural language question answering over relational databases and uses Formal Concept Analysis to find exceptions, classifications, and transformation steps. Our framework is based on a set of deep learning sequence tagging networks that extract information from the NL sentence, construct an equivalent intermediate sketch, and then map it into the actual tables and columns of the database. The output data of the query is converted into a lattice structure, which yields (extent, intent) tuples. These tuples are then analyzed to find the exceptions, classifications, and transformation steps.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Data analysts have to deal with a large number of complex and nested queries to dig out hidden insights from relational datasets spread over multiple files. Extracting the relevant result for a given query can easily be done through a deep-learnt NLQA framework, but detecting further explanations, facts, analyses and visualizations from the queried output is a challenging problem. This kind of analysis over a query's result can be handled by Formal Concept Analysis, a mathematical tool that produces a concept hierarchy, establishes semantic relations during querying, can find the implications as well as associations in a given dataset, can unify data and knowledge, and is capable of information engineering as well as data mining. So, to enable NL analytics over such datasets for analysts, we present in this paper a semantically enriched deep learning pipeline that a) enables natural language question answering over relational databases using a set of deep-learnt sequence tagging networks, and b) carries out regularity analysis over the query results using Formal Concept Analysis to interactively explore, discover and analyze the hidden structure in the selected data [12] [11]. The deep-learnt sequence tagging pipeline extracts information from the NL sentence and constructs an equivalent intermediate sketch, and then uses that sketch to formulate the actual database query on the relevant tables and columns. The query results are used in Formal Concept Analysis to create a lattice structure of the objects and attributes. The obtained lattice structure is then used to find exceptions in the data, to classify a new object, and to find the set of steps that transform the data from one structure to another.</p>
    </sec>
    <sec id="sec-2">
      <title>Formal Concept Analysis</title>
      <p>Formal Concept Analysis provides a theoretical framework for learning hierarchies of knowledge clusters called formal concepts. A basic notion in FCA is the formal context. Given a set G of objects and a set M of attributes (also called properties), a formal context is a triple (G, M, I) where I specifies (Boolean) relationships between objects of G and attributes of M, i.e., I ⊆ G × M. Usually, formal contexts are given in the form of a table that formalizes these relationships: a table entry indicates whether an object has the attribute or not. Let I(g) = {m ∈ M | (g, m) ∈ I} be the set of attributes satisfied by object g, and let I(m) = {g ∈ G | (g, m) ∈ I} be the set of objects that satisfy the attribute m. Given a formal context (G, M, I), two operators (·)′ define a Galois connection between the powersets (P(G), ⊆) and (P(M), ⊆), with A ⊆ G and B ⊆ M:</p>
      <p>A′ = {m ∈ M | ∀g ∈ A : gIm}</p>
      <p>B′ = {g ∈ G | ∀m ∈ B : gIm}</p>
      <p>That is to say, A′ is the set of all attributes satisfied by all objects in A, whereas B′ is the set of all objects that satisfy all attributes in B. A formal concept of (G, M, I) is defined as a pair (A, B) with A ⊆ G, B ⊆ M, A′ = B and B′ = A. A is called the extent of the formal concept (A, B), whereas B is called the intent. The set of all formal concepts of (G, M, I) equipped with a subconcept-superconcept partial order is the concept lattice, denoted by L. The order is defined as follows: for A1, A2 ⊆ G and B1, B2 ⊆ M,</p>
      <p>(A1, B1) ≤ (A2, B2) ⟺ A1 ⊆ A2 (equivalently, B2 ⊆ B1).</p>
      <p>In this case, the concept (A1, B1) is called the sub-concept and the concept (A2, B2) is called the super-concept.</p>
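<p>The two derivation operators can be sketched in a few lines of Python. The toy context below (objects G, attributes M, incidence pairs I) is invented for illustration, not from the paper:</p>

```python
# Toy formal context: objects G, attributes M, incidence relation I ⊆ G × M.
G = {"g1", "g2", "g3"}
M = {"a", "b", "c"}
I = {("g1", "a"), ("g1", "b"), ("g2", "a"), ("g3", "c")}

def prime_objects(A):
    """A′ = the attributes shared by every object in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def prime_attrs(B):
    """B′ = the objects having every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

# (A, B) is a formal concept iff A′ == B and B′ == A.
A = {"g1", "g2"}
print(prime_objects(A))               # {'a'}
print(prime_attrs(prime_objects(A)))  # the closure A″ of A: {'g1', 'g2'}
```

Applying the two operators in succession (A ↦ A″) gives the closure used when enumerating concepts.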
      <p>Association and Implication Rules. Given a formal context (G, M, I), two kinds of rules can be extracted: exact rules and approximate rules (rules with statistical measures, for example support and confidence). These rules express the underlying knowledge in an alternative way, and they are significant because they capture the interaction among attributes. The exact rules are classified as implication rules, while the approximate rules are classified as association rules.</p>
      <p>Definition. Given a formal context whose attribute set is M, an implication is an expression S ⟹ T, where S, T ⊆ M. An implication S ⟹ T extracted from a formal context, or its concept lattice, must satisfy S′ ⊆ T′; in other words, every object that has all the attributes of S also has all the attributes of T. If X is a set of attributes, then X respects an implication S ⟹ T iff S ⊈ X or T ⊆ X. An implication S ⟹ T holds in a set {X1, ..., Xn} of subsets of M iff each Xi respects S ⟹ T.</p>
      <p>Definition. Given a threshold minsupp ∈ [0, 1], where the support of X ⊆ M is</p>
      <p>supp(X) := card(X′) / card(G), with X′ := {g ∈ G | ∀m ∈ X : (g, m) ∈ I},</p>
      <p>association rules are determined by mining all pairs X ⟹ Y of subsets of M such that supp(X ⟹ Y) := supp(X ∪ Y) is above the threshold minsupp, and the confidence</p>
      <p>conf(X ⟹ Y) := supp(X ∪ Y) / supp(X)</p>
      <p>is above a given threshold minconf ∈ [0, 1].</p>
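<p>Under the definitions above, support and confidence can be computed directly from the context. The context dictionary here (object → its attribute set) is an invented example:</p>

```python
# Toy context: object -> set of attributes it has (invented data).
context = {
    "g1": {"a", "b"},
    "g2": {"a", "b"},
    "g3": {"a"},
    "g4": {"c"},
}

def support(X):
    """supp(X) = card(X′) / card(G): fraction of objects having every attribute in X."""
    extent = [g for g, attrs in context.items() if X.issubset(attrs)]
    return len(extent) / len(context)

def confidence(X, Y):
    """conf(X ⟹ Y) = supp(X ∪ Y) / supp(X)."""
    return support(X.union(Y)) / support(X)

print(support({"a"}))            # 0.75
print(confidence({"a"}, {"b"}))  # 0.5 / 0.75 ≈ 0.667
```

A rule X ⟹ Y is then kept when support(X.union(Y)) is at least minsupp and confidence(X, Y) is at least minconf.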
    </sec>
    <sec id="sec-3">
      <title>Methodology</title>
      <p>We present a novel approach where a natural language sentence is converted into a sketch (Listing 1.1) using deep learning models; the sketch is then used to construct the database query (SQL) and fetch the output. This output is then used to derive explanations or interesting facts, find outliers or exceptions, and rationalize the queried data if required (Fig. 1). In order to generate the query sketch, we have a pipeline of multiple sequence tagging deep neural networks: a Predicate Finder Model (Select clause), an Entity Finder Model (values in the Where clause), a Meta Type Model, and an Operators and Aggregation Model (all using a bi-directional LSTM network along with a CRF (conditional random field) output layer), where the natural language sentence is processed as a sequence tagging problem.</p>
      <p>
        The architecture uses ELMo embeddings, which are computed on top of two-layer bidirectional language models with character convolutions, as a linear function of the internal network states [16]. A character-level embedding is also used, as it has been found helpful for specific tasks and for handling the out-of-vocabulary problem. The character-level representation is concatenated with a word-level representation and fed into the bi-directional LSTM as input. In the next step, a CRF layer yielding the final predictions for every word is used [<xref ref-type="bibr" rid="ref8">8</xref>]. We have Z = (z1, z2, ..., zn) as the input sentence and P as the scores output by the Bi-LSTM network. Q(i, j) is the score of a transition from tag i to tag j for the sequence of predictions Y = (y1, y2, ..., yn). Finally, the score is defined as:
      </p>
      <p>s(Z, Y) = Σ_{i=0..n} Q(y_i, y_{i+1}) + Σ_{i=1..n} P(i, y_i)</p>
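<p>The scoring function above can be sketched as follows; the small matrices Q (transition scores, with a start tag) and P (Bi-LSTM emission scores) are made-up stand-ins for learned parameters:</p>

```python
# Q[i][j] = transition score from tag i to tag j; P[i][y] = emission score
# of tag y at position i (from the Bi-LSTM). A start tag supplies Q[y0 -> y1].
def crf_score(P, Q, y, start=0):
    """s(Z, Y) = sum of transition scores along Y plus emission scores."""
    tags = [start] + y  # prepend the start tag for the first transition
    trans = sum(Q[tags[i]][tags[i + 1]] for i in range(len(y)))
    emit = sum(P[i][y[i]] for i in range(len(y)))
    return trans + emit

Q = [[0.0, 0.5], [0.2, 0.1]]   # 2-tag toy transition matrix
P = [[1.0, 0.0], [0.0, 2.0]]   # emissions for a 2-word sentence
print(crf_score(P, Q, [0, 1]))  # Q[0][0] + Q[0][1] + P[0][0] + P[1][1] = 3.5
```

At training time this score is normalized over all tag sequences; here only the scoring itself is shown.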
      <sec id="sec-3-1">
        <title>Models details</title>
        <p>To generate the query sketch we use four different models with the same architecture (BiLSTM-CRF) [17] explained above, where the natural language sentence is processed as a sequence tagging problem. The neural network predicts a tag for each word, from which the predicates, entities, and values in the sentence are identified, and an intermediate Sketch (independent of the underlying database) is created. The Sketch is then mapped into the columns of the tables, with conditions, to construct the actual SQL query. In the sketch generation process the order of the models matters, as the input of each model depends on the output of the previous one. To train the models we had to create the annotations: where a predicate or entity present in the sentence directly matched a column or value in the actual database, we extracted it using a script; in the remaining cases we manually annotated the data.
- Predicate Finder Model (Select clause): This model identifies the target concepts (predicates) from the NL sentence. In a database query language, a predicate refers to the SELECT part of the query. Once predicates are identified, it becomes easier to extract entities from the remaining sentence.
- Entity Finder Model (values in the Where clause): This model identifies the relations (values/entities) in the query. In some cases the model misses words or captures extra ones. To tackle this issue, the predicted value is searched in Apache Solr; the structured data for the domain is assumed to be present in Lucene. After the search, we pick the entity from the database with the highest similarity score.
- Meta Type Model: This model identifies the type of concepts (predicates and values) at the node or table level. If a concept is present in more than one table, type information helps in the process of disambiguation. This helps in making the overall framework domain agnostic.
- Aggregations and Operators Model: In this model, aggregations and operators are predicted for predicates and entities respectively. Our framework currently supports the following aggregation functions: count, groupby, min, max, sum, asc_sort, desc_sort. Similarly, the following operators are supported: =, &gt;, &lt;, &lt;&gt;, ≤, ≥, like.</p>
        <p>
          The models are trained independently and do not share any internal representations. However, the input of one model depends on the previous one; for example, once predicates are identified, we replace the predicate part in the NL sentence with a token before passing it to the next model. We capture this information from the NL sentence and create an intermediate representation (Sketch), which is passed to the query generator (Neo4j knowledge graphs) to construct the SQL or another database query and yield results. The result table of the query is then converted into its equivalent formal context, a triple of objects, attributes, and the incidence relation between them. This formal context is used to extract the implication and association rules [<xref ref-type="bibr" rid="ref10">10</xref>] and to create a concept lattice, which derives all possible formal concepts from the context and orders them according to a subconcept-superconcept relationship [15]. This conceptual hierarchy of the queried output is further used to discover the knowledge implicitly present in it. Here we focus on three types of analysis over data queried from a relational database.
        </p>
        <p>Listing 1.1: Sketch
{
  "select": [
    { "pred_hint": model },
    { "pred_hint": horsepower, "aggregation": desc_sort }
  ],
  "conditions": {
    "pred_hint": cylinders,
    "value": 4,
    "operator": =
  }
}</p>
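<p>The mapping from a sketch of this shape to SQL could look like the following; the table name `cars`, the column map, and the exact SQL shape are illustrative assumptions, not the paper's actual query generator:</p>

```python
# Hypothetical mapping from sketch pred_hints to actual table columns.
COLUMN_MAP = {"model": "model", "horsepower": "horsepower", "cylinders": "cylinders"}

def sketch_to_sql(sketch, table="cars"):
    """Turn an intermediate sketch into a SQL string (table name assumed)."""
    cols, order = [], []
    for item in sketch["select"]:
        col = COLUMN_MAP[item["pred_hint"]]
        if item.get("aggregation") == "desc_sort":
            order.append(f"{col} DESC")
        cols.append(col)
    where = " AND ".join(
        f'{COLUMN_MAP[c["pred_hint"]]} {c["operator"]} {c["value"]}'
        for c in sketch.get("conditions", [])
    )
    sql = f"SELECT {', '.join(cols)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    if order:
        sql += f" ORDER BY {', '.join(order)}"
    return sql

sketch = {
    "select": [{"pred_hint": "model"},
               {"pred_hint": "horsepower", "aggregation": "desc_sort"}],
    "conditions": [{"pred_hint": "cylinders", "value": 4, "operator": "="}],
}
print(sketch_to_sql(sketch))
# SELECT model, horsepower FROM cars WHERE cylinders = 4 ORDER BY horsepower DESC
```

The same sketch could equally be mapped to a Cypher MATCH/WHERE/RETURN query, which is what makes the sketch database-independent.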
        <p>Outliers Analysis. This is the first type of analysis that can be performed on the queried output. Outliers are defined as rules that contradict common beliefs. Such rules can play an important role in understanding the underlying data as well as in making critical decisions. Outlier analysis uncovers the exceptions hidden in the given query output. To perform it over the queried output, we first create a preliminary formal context from the given raw data. Then, using the ConExp tool [13], implication and association rules are generated for the complete dataset. These rules show the correlations among different attributes. After a query is posed, the concept lattice of the queried data is created and formal concepts in the form of (extent, intent) tuples are extracted from it. The intents of these formal concepts are then compared with the implication and association rules. If an intent of the queried output violates any of the implication or association rules, it is considered an outlier for that query.</p>
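<p>The outlier check described above, comparing queried intents against rules mined from the full dataset, might be sketched as follows; the rules and intents are invented for illustration:</p>

```python
# Mined rules (S, T): objects with every attribute in S also have those in T.
# Rules and queried intents below are invented for illustration.
rules = [({"Bachelors"}, {">50K"}), ({"10th"}, {"≤50K"})]

def violates(intent, rule):
    """An intent violates S ⟹ T when it contains S but not all of T."""
    S, T = rule
    return S.issubset(intent) and not T.issubset(intent)

def outliers(intents):
    """Queried intents that contradict at least one mined rule."""
    return [i for i in intents if any(violates(i, r) for r in rules)]

queried = [{"Bachelors", "≤50K"}, {"10th", ">50K"}, {"Bachelors", ">50K"}]
print(outliers(queried))  # the first two intents break the mined rules
```

Each flagged intent corresponds to an exception such as the Arbella and Adarsh cases reported in Section 4.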
        <p>Fig. 1: Architecture of the framework. The NL query passes through the Predicate, Entity Extraction, Meta-type Identification, and Operator &amp; Aggregation models (with Word2Vec embeddings) to produce the Sketch (select and conditions clauses); the Query Generator turns the Sketch into a DB query (MATCH ... WHERE ... RETURN ...); the query result yields a Formal Context and Concept Lattice, from which implication and association rules, outliers, classifications, explanations, and transformations are derived.</p>
        <p>Transformation Analysis. This is the second type of analysis that we introduce in our framework. Transformation analysis is used to compare the results of two queries, for tasks such as converting the underlying lattice structure of one set of query results into the lattice structure of another set of query results. This kind of analysis is performed by finding the difference between the intents of the formal concepts of both lattices. In our framework, when two semantically enriched queries are posed, the lattice structures of their respective outputs are generated. To find the possible transformation requirements, we match the intents of both concept lattices and write down the differences between them. This gives us the disparity in the kinds of objects contained in the two lattices, which helps in transforming one lattice into the other.</p>
        <p>Classification Analysis. Classification analysis in our framework is done to predict the category of a new object. This is carried out by defining a target attribute t in the dataset, generating a concept lattice Ci for each value vi (i ∈ N) of the target attribute, and then comparing the new object's attributes with the intents of each Ci. In this analysis, a query asking for object details is posed. The lattice structures Ci corresponding to each vi are stored in memory. At run time, the new object's attribute set is matched against the intents of each Ci. If the intent of the new object is contained in exactly one lattice Cj for some j in the range of i, then the new object is classified under the corresponding category vj; if more than one concept lattice contains the new object's intent, our framework cannot determine its category.</p>
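<p>Both analyses reduce to set operations over intents. A toy sketch, with invented intents and category lattices (the attribute names are hypothetical):</p>

```python
def transformation(intents_src, intents_dst):
    """Intents to remove from the source lattice and to introduce so it matches the target."""
    remove = [i for i in intents_src if i not in intents_dst]
    introduce = [i for i in intents_dst if i not in intents_src]
    return remove, introduce

def classify(obj_attrs, lattices):
    """Category whose lattice contains the new object's intent; None when ambiguous or absent."""
    hits = [v for v, intents in lattices.items()
            if any(obj_attrs.issubset(i) for i in intents)]
    return hits[0] if len(hits) == 1 else None

# Invented lattices keyed by target-attribute value.
lattices = {"diabetic": [{"high_bp", "high_bmi"}],
            "healthy": [{"normal_bp", "normal_bmi"}]}
print(classify({"high_bp", "high_bmi"}, lattices))  # diabetic

src = [{"Private", "≤50K", "11th"}]
dst = [{"Private", ">50K", "Masters"}]
print(transformation(src, dst))  # remove the first intent, introduce the second
```

When an object's intent matches lattices of more than one category, `classify` returns None, mirroring the framework's refusal to decide ambiguous cases.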
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiments and Results</title>
      <p>The Census Income dataset, taken from the UCI Machine Learning Repository [14], is used. This relational database contains 906 observations and 14 features of people, such as age, occupation, education, salary, workclass, and native country. We constructed the Neo4j knowledge graph from the CSV and also generated the implication and association rules. In this dataset we considered people's names as the set of objects and applied conceptual scaling over the multivalued features mentioned above to generate the set of attributes, so that the objects and attributes have a binary relation between them.</p>
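<p>Conceptual scaling of a multivalued feature into binary attributes can be sketched as follows; the rows are invented, not the actual census data:</p>

```python
# Invented rows: object name -> multivalued features (not the actual census data).
rows = {
    "Adarsh": {"workclass": "Private", "education": "Bachelors"},
    "Arbella": {"workclass": "Private", "education": "10th"},
}

def scale(rows):
    """Nominal conceptual scaling: one binary attribute per (feature, value) pair."""
    return {obj: {f"{k}={v}" for k, v in feats.items()}
            for obj, feats in rows.items()}

ctx = scale(rows)
print(sorted(ctx["Adarsh"]))  # ['education=Bachelors', 'workclass=Private']
```

The resulting object-to-attribute-set mapping is exactly the binary formal context on which the lattice and the rules are computed.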
      <p>Implication and association rules extracted from the data (a snapshot of the dataset itself is shown as a figure in the original) are:
S.No.  rule                             no. of instances
1      11th ⟹ ≤50K                     118
2      State-gov, 5th-6th ⟹ ≤50K       45
3      Private, 10th ⟹ ≤50K            63
4      Doctorate, State-gov ⟹ &gt;50K     17
5      Federal-gov, Masters ⟹ &gt;50K     41
6      Local-gov, 12th ⟹ ≤50K          86
7      Bachelors ⟹ &gt;50K                178</p>
      <sec id="sec-4-1">
        <title>1. Outliers Analysis</title>
        <p>Query: List people working more than 60 hours per week and having
exceptions in salary with respect to education.</p>
        <sec id="sec-4-1-1">
          <title>Rules extracted from lattice are:</title>
          <p>S.No.  rule
1      Gerrard ⟹ [≤50K, Private, France, Prof-school], Gerrard
2      Arbella ⟹ [&gt;50K, Private, Greece, 10th], Arbella, Greece
3      Amine ⟹ [≤50K, Self-emp-not-inc, Vietnam, Bachelors], Amine, Vietnam
4      Arieyonna ⟹ [&gt;50K, State-gov, India, Prof-school], Arieyonna, State-gov, India
5      Adarsh ⟹ [≤50K, Private, Mexico, Bachelors], Adarsh, Mexico
6      Aadhav ⟹ [&gt;50K, Private, United-States, Some-college], Aadhav</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>Outliers</title>
          <p>S.No.  rule
1      Arbella ⟹ [&gt;50K, Private, Greece, 10th], Arbella, Greece
2      Adarsh ⟹ [≤50K, Private, Mexico, Bachelors], Adarsh, Mexico</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>Analysis</title>
          <p>- Adarsh works &gt;60 hours per week with a salary ≤ $50K and a Bachelors degree.
- Arbella works &gt;60 hours per week with a salary &gt; $50K and only 10th-grade education.
2. Transformation Analysis
Query: What needs to be done to transform the workclass, education and salary of men in Cuba to be like men in England?</p>
        </sec>
        <sec id="sec-4-1-4">
          <title>Intents that need to be removed:</title>
          <p>a) (≤50K, Self-emp-inc, 5th-6th); b) (Private, &gt;50K, Masters); c) (≤50K, Private, 11th); d) (≤50K, Private, 12th); e) (Private, ≤50K, 7th-8th); and f) (≤50K, Private, 1st-4th)</p>
        </sec>
        <sec id="sec-4-1-5">
          <title>Intents that need to be introduced:</title>
          <p>a) (&gt;50K, Masters, Private); b) (Self-emp-inc, Bachelors, &gt;50K); c) (&gt;50K, Private, HS-grad); d) (Self-emp-not-inc, ≤50K, HS-grad); e) (Private, ≤50K, Masters); f) (Bachelors, &gt;50K, Private); g) (&gt;50K, Masters, Federal-gov); and h) (≤50K, Doctorate, Private)
This shows the need for higher education and for self-employment.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>3. Classification Analysis</title>
        <p>Query: Predict whether Aarav has diabetes or not from his blood pressure, body mass index and age.</p>
        <p>The framework prompts for the person's details: name, age, blood pressure, and body mass index.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>Based on the features of Aarav, the framework predicts that he does not have diabetes.</p>
      <p>We have described a framework wherein the NL sentence is semantically mapped into an intermediate logical form (Sketch) using multiple sequence tagging networks. This approach of semantic enrichment abstracts the low-level semantic information from the sentence and helps in generalising to various database queries (e.g. SQL, CQL). The answers to these queries are then further interpreted using FCA to find outliers, facts and explanations, classifications, and transformations. Experimental results show how NLQA and FCA can help an analyst discover regularities in complex data.</p>
      <p>11. Peter D. Grünwald: The Minimum Description Length Principle, MIT Press, pages 3-40 (2007)
12. Bernhard Ganter, Rudolf Wille: Formal Concept Analysis: Mathematical Foundations, Springer, Berlin/Heidelberg/New York (1999)
13. Serhiy A. Yevtushenko: System of data analysis "Concept Explorer" (in Russian). In: Proceedings of the 7th National Conference on Artificial Intelligence KII, pages 127-134, Russia (2000)
14. Dua, Dheeru, Graff, Casey: UCI Machine Learning Repository, http://archive.ics.uci.edu/ml, University of California, Irvine, School of Information and Computer Sciences (2017)
15. Ganter B., Wille R.: Formal Concept Analysis: Mathematical Foundations. Springer Science &amp; Business Media (2012)
16. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer: Deep contextualized word representations. CoRR, abs/1802.05365 (2018)
17. Xuezhe Ma, Eduard H. Hovy: End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF, CoRR, abs/1603.01354 (2016)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Amit</given-names>
            <surname>Sangroya</surname>
          </string-name>
          , Pratik Saini, Mrinal Rawat, Gautam Shroff,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Anantaram: Natural Language Business Intelligence Question Answering through SeqtoSeq Transfer Learning</article-title>
          ,
          <source>In: DLKT: The 1st Pacific-Asia Workshop on Deep Learning for Knowledge Transfer</source>
          ,
          <string-name>
            <given-names>PAKDD</given-names>
            ,
            <surname>April</surname>
          </string-name>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Victor</given-names>
            <surname>Zhong</surname>
          </string-name>
          , Caiming Xiong, Richard Socher: Seq2SQL:
          <article-title>Generating Structured Queries from Natural Language using Reinforcement Learning</article-title>
          , https://doi.org/arXiv:
          <fpage>1709</fpage>
          .
          <fpage>00103</fpage>
          , (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Xuezhe</surname>
            <given-names>Ma</given-names>
          </string-name>
          , Eduard H. Hovy:
          <article-title>End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF</article-title>
          .,CoRR,abs/1603.01354, http://arxiv.org/abs/1603.01354,https://doi.org/1603.01354, dblp computer science bibliography, https://dblp.org, (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Shefali</given-names>
            <surname>Bhat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Anantaram</surname>
          </string-name>
          , Hemant K. Jain:
          <article-title>Framework for TextBased Conversational User-Interface for Business Applications</article-title>
          . Knowledge Science, Engineering and Management, In: Second International Conference, KSEM Melbourne, Australia, DBLP:conf/ksem/2007, https://doi.org/10.1007/978-3-
          <fpage>540</fpage>
          -76719-0 31, https://doi.org/10.1007/978-3-
          <fpage>540</fpage>
          -76719-0 31, https://dblp.org/rec/bib/conf/ksem/BhatAJ07, November (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Loper</surname>
          </string-name>
          , Edward, Bird, Steven: NLTK:
          <article-title>The Natural Language Toolkit</article-title>
          ,
          <source>In: Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics</source>
          , Volume
          <volume>1</volume>
          , ETMTNLP '
          <volume>02</volume>
          , pages:
          <volume>63</volume>
          {70, https://doi.org/10.3115/1118108.1118117, https://doi.org/10.3115/1118108.1118117, Philadelphia, Pennsylvania, (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>Christopher D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Surdeanu</surname>
            , Mihai, Bauer, John, Finkel, Jenny, Bethard,
            <given-names>Steven J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McClosky</surname>
            ,
            <given-names>David: The</given-names>
          </string-name>
          <string-name>
            <surname>Stanford CoreNLP Natural Language Processing Toolkit</surname>
          </string-name>
          , In:
          <article-title>Association for Computational Linguistics (ACL) System Demonstrations</article-title>
          ,pages:
          <volume>55</volume>
          {60, http://www.aclweb.org/anthology/P/P14/P14-5010, (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Li</surname>
            , Fei, Jagadish,
            <given-names>H. V.</given-names>
          </string-name>
          :
          <article-title>Constructing an Interactive Natural Language Interface for Relational Databases</article-title>
          ,
          <source>Proc. VLDB Endow., volume: 8</source>
          , pages:
          <volume>73</volume>
          {84, http://dx.doi.org/10.14778/2735461.2735468, https://doi.org/10.14778/2735461.2735468,
          <string-name>
            <given-names>VLDB</given-names>
            <surname>Endowment</surname>
          </string-name>
          ,
          <string-name>
            <surname>September</surname>
          </string-name>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Lample</surname>
          </string-name>
          , Guillaume, Ballesteros, Miguel, Subramanian, Sandeep, Kawakami, Kazuya, Dyer, Chris:
          <article-title>Neural Architectures for Named Entity Recognition, In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</article-title>
          , pages:
          <volume>260</volume>
          {270, https://doi.org/10.18653/v1/
          <fpage>N16</fpage>
          -1030 http://aclweb.org/anthology/N16-1030, San Diego, California, (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dmitry</surname>
            <given-names>I.</given-names>
          </string-name>
          <article-title>Ignatov: Introduction to Formal Concept Analysis and Its Applications in Information Retrieval and Related Fields</article-title>
          , Russian Summer School in Information Retrieval,
          <string-name>
            <surname>December</surname>
          </string-name>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>K</given-names>
            <surname>Sumangali</surname>
          </string-name>
          ,
          <article-title>Ch Aswani Kumar: Determination of interesting rules in FCA using information gain</article-title>
          ,
          <source>In: First International Conference on Networks and Soft Computing (ICNSC2014)</source>
          , IEEE,
          <string-name>
            <surname>August</surname>
          </string-name>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>