<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Enabling natural language analytics over relational data using Formal Concept Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">C</forename><surname>Anantaram</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">TCS Research</orgName>
								<orgName type="institution" key="instit2">Tata Consultancy Services Ltd</orgName>
								<address>
									<addrLine>Gwal Pahari</addrLine>
									<settlement>Gurgaon</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mouli</forename><surname>Rastogi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">TCS Research</orgName>
								<orgName type="institution" key="instit2">Tata Consultancy Services Ltd</orgName>
								<address>
									<addrLine>Gwal Pahari</addrLine>
									<settlement>Gurgaon</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mrinal</forename><surname>Rawat</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">TCS Research</orgName>
								<orgName type="institution" key="instit2">Tata Consultancy Services Ltd</orgName>
								<address>
									<addrLine>Gwal Pahari</addrLine>
									<settlement>Gurgaon</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Pratik</forename><surname>Saini</surname></persName>
							<email>pratik.saini@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">TCS Research</orgName>
								<orgName type="institution" key="instit2">Tata Consultancy Services Ltd</orgName>
								<address>
									<addrLine>Gwal Pahari</addrLine>
									<settlement>Gurgaon</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Enabling natural language analytics over relational data using Formal Concept Analysis</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">39038F6EEC7A4692B26898D9270A08F0</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T02:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Analysts pose a variety of questions over large relational databases containing data on the domain they are analyzing. Enabling natural language question answering over such data requires mechanisms to extract exceptions in the data, find steps to transform data, detect implications in the data, and apply classifications on the data. Motivated by this problem, we propose a semantically enriched deep learning pipeline that supports natural language question answering over relational databases and uses Formal Concept Analysis to find exceptions, classifications and transformation steps. Our framework is based on a set of deep learning sequence tagging networks that extract information from the NL sentence, construct an equivalent intermediate sketch, and then map it onto the actual tables and columns of the database. The output data of the query is converted into a lattice structure, which yields (extent, intent) tuples. These tuples are then analyzed to find the exceptions, classifications and transformation steps.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Data analysts have to deal with a large number of complex and nested queries to dig out hidden insights from relational datasets spread over multiple files. Extracting the relevant result for a given query can easily be done through a deep-learnt NLQA framework, but deriving further explanations, facts, analyses and visualizations from the queried output is a challenging problem. This kind of analysis over a query's result can be handled by Formal Concept Analysis, a mathematical tool that produces a concept hierarchy, establishes semantic relations during querying, finds implications as well as associations in a given dataset, unifies data and knowledge, and supports information engineering as well as data mining. To enable NL analytics over such datasets for analysts, we present in this paper a semantically enriched deep learning pipeline that a) enables natural language question answering over relational databases using a set of deep-learnt sequence tagging networks, and b) carries out regularity analysis over the query results using Formal Concept Analysis to interactively explore, discover and analyze the hidden structure in the selected data <ref type="bibr" target="#b11">[12]</ref>  <ref type="bibr" target="#b10">[11]</ref>. The deep-learnt sequence tagging pipeline extracts information from the NL sentence, constructs an equivalent intermediate sketch, and then uses that sketch to formulate the actual database query over the relevant tables and columns. The query results are used in Formal Concept Analysis to create a lattice structure of objects and attributes. The obtained lattice structure is then used to find exceptions in the data, to classify a new object, and to find the set of steps needed to transform the data from one structure to another.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Formal Concept Analysis</head><p>Formal Concept Analysis provides a theoretical framework for learning hierarchies of knowledge clusters called formal concepts. A basic notion in FCA is the formal context. Given a set G of objects and a set M of attributes (also called properties), a formal context is a triple (G, M, I) where I specifies (Boolean) relationships between objects of G and attributes of M, i.e., I ⊆ G × M. Usually, formal contexts are given in the form of a table that formalizes these relationships; a table entry indicates whether an object has the attribute or not. Let I(g) = {m ∈ M ; (g, m) ∈ I} be the set of attributes satisfied by object g, and let I(m) = {g ∈ G; (g, m) ∈ I} be the set of objects that satisfy the attribute m. Given a formal context (G, M, I), two derivation operators (·)' define a Galois connection between the powersets (P(G), ⊆) and (P(M), ⊆), with A ⊆ G and B ⊆ M:</p><formula xml:id="formula_0">A' = {m ∈ M | ∀g ∈ A : gIm} and B' = {g ∈ G | ∀m ∈ B : gIm}.</formula><p>That is to say, A' is the set of all attributes satisfied by all objects in A, whereas B' is the set of all objects that satisfy all attributes in B. A formal concept of (G, M, I) is defined as a pair (A, B) with A ⊆ G, B ⊆ M, A' = B and B' = A. A is called the extent of the formal concept (A, B), whereas B is called the intent. The set of all formal concepts of (G, M, I) equipped with a subconcept-superconcept partial order ≤ is the concept lattice denoted by L. The order ≤ is defined as:</p><formula xml:id="formula_1">For A_1, A_2 ⊆ G and B_1, B_2 ⊆ M: (A_1, B_1) ≤ (A_2, B_2) ⟺ A_1 ⊆ A_2 (equivalent to B_2 ⊆ B_1).</formula><p>In this case, the concept (A_1, B_1) is called the sub-concept and the concept (A_2, B_2) is called the super-concept.</p></div>
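The derivation operators and formal concepts defined above can be illustrated with a short sketch. The toy context below (objects g1–g3, attributes a–c) is invented for illustration; real concept lattices are computed with dedicated algorithms such as NextClosure, whereas this brute-force enumeration is only viable for tiny contexts.

```python
# Minimal sketch of the FCA derivation operators A' and B' and
# brute-force enumeration of all formal concepts (A, B) with A' = B
# and B' = A. Context data is hypothetical.
from itertools import combinations

G = ["g1", "g2", "g3"]                  # objects
M = ["a", "b", "c"]                     # attributes
I = {("g1", "a"), ("g1", "b"),          # incidence relation I ⊆ G × M
     ("g2", "b"), ("g2", "c"),
     ("g3", "b")}

def up(A):
    # A' : attributes satisfied by every object in A
    return {m for m in M if all((g, m) in I for g in A)}

def down(B):
    # B' : objects satisfying every attribute in B
    return {g for g in G if all((g, m) in I for m in B)}

def concepts():
    # Every concept arises as (A'', A') for some A ⊆ G; deduplicate.
    seen = set()
    for r in range(len(G) + 1):
        for A in combinations(G, r):
            B = up(set(A))
            seen.add((frozenset(down(B)), frozenset(B)))
    return seen

cs = concepts()
```

For instance, since only attribute b is shared by all three objects, the top of this lattice is the concept ({g1, g2, g3}, {b}).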
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Association and Implication Rules</head><p>From a formal context (G, M, I), both exact rules and approximate rules (rules with statistical values, for example support and confidence) can be extracted.</p><p>These rules express, in an alternative way, the underlying knowledge of interactions among attributes. Exact rules are classified as implication rules, while approximate rules are classified as association rules. Definition: Given a formal context whose attribute set is M, an implication is an expression S ⟹ T, where S, T ⊆ M. An implication S ⟹ T extracted from a formal context, or its concept lattice, must be such that S' ⊆ T'. In other words: every object that has all the attributes of S also has all the attributes of T. If X is a set of attributes, then X respects an implication S ⟹ T iff S ⊄ X or T ⊆ X. An implication S ⟹ T holds in a set {X_1, ..., X_n} ⊆ P(M) iff each X_i respects S ⟹ T. Definition: Given a threshold minsupp ∈ [0, 1], where the support is</p><formula xml:id="formula_2">supp(X) := card(X') / card(G) (with X' := {g ∈ G | ∀m ∈ X : (g, m) ∈ I}),</formula><p>association rules are determined by mining all pairs X ⟹ Y of subsets of M such that</p><formula xml:id="formula_3">supp(X ⟹ Y) := supp(X ∪ Y)</formula><p>is above the threshold minsupp, and the confidence</p><formula xml:id="formula_4">conf(X ⟹ Y) := supp(X ∪ Y) / supp(X)</formula><p>is above a given threshold minconf ∈ [0, 1].</p></div>
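The support and confidence definitions above can be sketched directly. The four-object context below is made up; supp(X ⟹ Y) = supp(X ∪ Y) and conf(X ⟹ Y) = supp(X ∪ Y) / supp(X), exactly as in the formulas.

```python
# Hedged sketch of association-rule support and confidence over a
# formal context; the context (objects g1-g4 and their attributes)
# is invented for illustration.
G = ["g1", "g2", "g3", "g4"]
I = {"g1": {"a", "b"},       # attributes of each object
     "g2": {"a", "b"},
     "g3": {"a"},
     "g4": {"c"}}

def supp(X):
    # fraction of objects possessing every attribute in X
    return sum(1 for g in G if X <= I[g]) / len(G)

def rule_stats(X, Y):
    # support and confidence of the association rule X => Y
    s = supp(X | Y)
    c = s / supp(X) if supp(X) else 0.0
    return s, c

s, c = rule_stats({"a"}, {"b"})
# supp({a, b}) = 2/4, supp({a}) = 3/4, so conf = 2/3
```

A rule is retained only when s ≥ minsupp and c ≥ minconf for the chosen thresholds.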
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Methodology</head><p>We present a novel approach where a natural language sentence is converted into a sketch (Listing 1.1) using deep learning models; the sketch is then used to construct the database query (SQL) and fetch the output. This output is then analyzed to derive explanations or interesting facts, find outliers or exceptions, and rationalize the queried data if required (Fig. <ref type="figure" target="#fig_1">1</ref>). To generate the query sketch, we use a pipeline of multiple sequence tagging deep neural networks: a Predicate Finder Model (Select clause), an Entity Finder Model (values in the Where clause), a Meta Type Model, and an Operators and Aggregations Model (all using a bi-directional LSTM network with a CRF (conditional random field) output layer), where the natural language sentence is processed as a sequence tagging problem. The architecture uses ELMo embeddings, computed on top of two-layer bidirectional language models with character convolutions, as a linear function of the internal network states <ref type="bibr" target="#b15">[16]</ref>. A character-level embedding is also used, as it has been found helpful for specific tasks and for handling the out-of-vocabulary problem. The character-level representation is concatenated with a word-level representation and fed into the bi-directional LSTM as input.</p><p>In the next step, a CRF layer yields the final prediction for every word <ref type="bibr">[8]</ref>. Let Z = (z_1; z_2; ...; z_n) be the input sentence and P the matrix of scores output by the Bi-LSTM network. Q_{i,j} is the score of a transition from tag i to tag j for the sequence of predictions Y = (y_1; y_2; ...; y_n). Finally, the score is defined as:</p><formula xml:id="formula_5">s(Z, Y) = Σ_{i=0}^{n} Q_{y_i, y_{i+1}} + Σ_{i=1}^{n} P_{i, y_i}</formula></div>
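The sequence score above can be computed with a few lines. This is a toy illustration of the formula only, with arbitrary (not learned) transition and emission values; the i = 0 and i = n transition terms use conventional START/END tags, an assumption on our part since the paper does not spell out the boundary handling.

```python
# Toy computation of the BiLSTM-CRF sequence score
#   s(Z, Y) = sum_{i=0}^{n} Q[y_i][y_{i+1}] + sum_{i=1}^{n} P[i][y_i]
# with y_0 = START and y_{n+1} = END. All scores are arbitrary.
START, A, B, END = 0, 1, 2, 3

# Q[i][j]: transition score from tag i to tag j
Q = {START: {A: 1.0, B: 0.0},
     A: {B: 0.5, END: 0.2},
     B: {A: 0.0, END: 0.3}}

# P[i][t]: emission score of tag t at position i (from the Bi-LSTM)
P = [{A: 2.0, B: 0.0},
     {A: 0.0, B: 1.5}]

def score(P, Q, y):
    tags = [START] + y + [END]
    trans = sum(Q[a][b] for a, b in zip(tags, tags[1:]))
    emit = sum(P[i][t] for i, t in enumerate(y))
    return trans + emit

s_val = score(P, Q, [A, B])   # 1.0 + 0.5 + 0.3 + 2.0 + 1.5 = 5.3
```

Training maximizes this score for the gold tag sequence relative to all others; at inference the highest-scoring sequence is found with Viterbi decoding.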
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Models details</head><p>To generate the query sketch we use four different models built on the same architecture (BiLSTM-CRF) <ref type="bibr" target="#b16">[17]</ref> explained above, where the natural language sentence is processed as a sequence tagging problem. The neural network predicts a tag for each word, from which the predicates, entities and values in the sentence are identified, and an intermediate Sketch (independent of the underlying database) is created. The Sketch is then mapped onto the columns of the tables, with conditions, to construct the actual SQL query. In the sketch generation process the order of the models matters, since the input of each model depends on the output of the previous one. To train the models, we had to create annotations: where a predicate/entity in the sentence directly matched a column or value in the actual database, we extracted it using a script; in the remaining cases we manually annotated the data.</p><p>The models are trained independently and do not share any internal representations. However, the input of one model depends on the previous one. For example, once predicates are identified we replace the predicate part of the NL sentence with a placeholder token before passing it to the next model. We capture this information from the NL sentence and create an intermediate representation (Sketch), which is then passed to the query generator (over Neo4j knowledge graphs) to construct the SQL or another database query and yield results. The result table of the query is then converted into its equivalent formal context, a triple of objects, attributes and an incidence relation between them. This formal context is used to extract the implication and association rules <ref type="bibr" target="#b9">[10]</ref> and to create a concept lattice, which derives all possible formal concepts from the context and orders them according to the subconcept-superconcept relationship <ref type="bibr" target="#b14">[15]</ref>. This conceptual hierarchy of the queried output is further used to discover knowledge implicitly present in it. Here we focus on three types of analysis over data queried from a relational database. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Outliers Analysis</head><p>This is the first type of analysis that can be performed on the queried output. Outliers are defined as rules that contradict common beliefs. Such rules can play an important role in understanding the underlying data as well as in making critical decisions. Outlier analysis uncovers the exceptions hidden in the given query output. To perform it, we first create a preliminary formal context from the given raw data. Then, using the Conexp tool <ref type="bibr" target="#b12">[13]</ref>, implication and association rules are generated for the complete dataset. These rules show the correlations among different attributes.</p><p>After a query is posed, the concept lattice of the queried data is created and formal concepts in the form of (extent, intent) tuples are extracted from it. The intents of these formal concepts are then compared with the implication and association rules. If an intent of the queried output violates any of the implication or association rules, it is considered an outlier for that query. </p></div>
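The outlier check described above reduces to a containment test: an intent violates an implication S ⟹ T when it contains S but not T. The sketch below illustrates this with made-up rules and attribute names (the Census-style attributes are ours, not from the extracted rule set).

```python
# Sketch of the outlier check: flag an (extent, intent) tuple whose
# intent satisfies the premise S of a rule S => T but not the
# conclusion T. Rules and intents below are illustrative only.
def violations(intent, rules):
    # rules: list of (S, T) pairs of attribute sets
    return [(S, T) for S, T in rules if S <= intent and not T <= intent]

rules = [(frozenset({"hours>60"}), frozenset({"salary>50K"}))]

conforming = violations({"hours>60", "salary>50K", "edu:10th"}, rules)
outlier    = violations({"hours>60", "salary<=50K", "edu:Bachelors"}, rules)
```

Here the second intent is reported as an outlier because it works more than 60 hours yet lacks the high-salary attribute the rule predicts.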
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Transformation Analysis</head><p>This is the second type of analysis that we introduce in our framework. Transformation analysis compares the results of two queries, for tasks such as converting the underlying lattice structure of one set of query results into the lattice structure of another set of query results. This kind of analysis is performed by finding the difference between the intents of the formal concepts of the two lattices. In our framework, when two semantically enriched queries are posed, the lattice structures of their respective outputs are generated.</p><p>To find the possible transformation requirements, we match the intents of the two concept lattices and record the differences between them. This gives us the disparity in the kinds of objects contained in the two lattices, which helps in transforming one lattice into the other.</p></div>
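A minimal reading of this intent comparison is a set difference over the attributes appearing in each lattice's intents. The sketch below uses invented intents in the spirit of the Cuba/England query; it is one plausible realization, not the paper's exact procedure.

```python
# Hedged sketch of transformation analysis: which attributes appear
# in the target lattice's intents but not the source lattice's.
# Intent contents are hypothetical.
def intent_attrs(lattice):
    # union of all attributes over a lattice's intents
    out = set()
    for intent in lattice:
        out |= intent
    return out

def transform_steps(source, target):
    # attributes that must be introduced to move source toward target
    return intent_attrs(target) - intent_attrs(source)

cuba    = [frozenset({"workclass:Private", "edu:HS-grad"})]
england = [frozenset({"workclass:Private", "edu:Bachelors", "salary>50K"})]
steps = transform_steps(cuba, england)
```

The resulting attribute set is a direct readout of "what needs to change" between the two query results.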
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Classification analysis</head><p>Classification analysis in our framework is done to predict the category of new objects. It is carried out by defining a target attribute t in the dataset, generating a concept lattice C_i for each value v_i, i ∈ N, of the target attribute, and then comparing a new object's attributes with the intents of each C_i. In this analysis, a query asking for object details is posed. The lattice structures C_i corresponding to each v_i are stored in memory. At run time, the new object's attribute set is matched against the intents of each C_i. If the intent of the new object is contained in exactly one lattice C_j for some j ∈ range(i), then the new object is classified under the corresponding category v_j; if more than one concept lattice contains the new object's intent, our framework cannot determine its category.</p></div>
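The classification rule above, one lattice per target value and intent containment as the match test, can be sketched as follows. The lattice contents and attribute names are hypothetical, loosely echoing the diabetes example in the experiments.

```python
# Minimal sketch of classification by intent containment: assign
# category v_j iff the new object's attribute set is contained in an
# intent of exactly one lattice C_j. All data here is made up.
def classify(obj_attrs, lattices):
    # lattices: {category v_i: set of intents (frozensets)}
    hits = [v for v, intents in lattices.items()
            if any(obj_attrs <= B for B in intents)]
    return hits[0] if len(hits) == 1 else None   # ambiguous -> None

lattices = {
    "diabetic":     {frozenset({"bp:high", "bmi:high", "age:50+"})},
    "non-diabetic": {frozenset({"bp:normal", "bmi:normal", "age:30-"})},
}
label = classify(frozenset({"bp:normal", "bmi:normal"}), lattices)
```

Returning None for an ambiguous match mirrors the framework's behavior of refusing to classify when multiple lattices contain the new object's intent.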
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments and Results</head><p>The Census Income dataset from the UCI machine learning repository <ref type="bibr" target="#b13">[14]</ref> is used. This relational database contains 906 observations and 14 features of people, such as age, occupation, education, salary, workclass and native country. We construct the Neo4j knowledge graph from the CSV and also generate the implication and association rules. In this dataset we consider people's names as the set of objects and apply conceptual scaling over the multivalued features mentioned above to generate the set of attributes, so that the objects and the attributes have a binary relation between them.</p><p>A snapshot of the dataset is:</p><p>Implication and association rules extracted from the data are: </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Outliers Analysis</head><p>-Adarsh works &gt;60 hours per week with salary ≤ $50K and a Bachelors degree.</p><p>-Arbella works &gt;60 hours per week with salary &gt;$50K and only a 10th-grade education.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Transformation Analysis</head><p>Query: What needs to be done to transform the workclass, education and salary of men in Cuba to be like those of men in England? </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Classification Analysis</head><p>Query: Predict whether Aarav has diabetes or not from his blood pressure, body mass index and age. Based on Aarav's features, it is predicted that he does not have diabetes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>We have described a framework wherein an NL sentence is semantically mapped into an intermediate logical form (Sketch) using multiple sequence tagging networks. This approach of semantic enrichment abstracts the low-level semantic information from the sentence and helps in generalising to various database queries (e.g. SQL, CQL). The answers to these queries are then further interpreted using FCA to find outliers, facts and explanations, classifications and transformations. Experimental results show how NLQA and FCA can help an analyst discover regularities in complex data.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Listing 1.1:</head><label>1.1</label><figDesc>Sketch { "select": [ { "pred hint": model }, { "pred hint": horsepower, "aggregation": desc sort } ], "conditions": { "pred hint": cylinders, "value": 4, "operator": = } }</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: High Level Architecture of the Process</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: England</figDesc><graphic coords="8,207.48,479.55,200.40,141.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>-Predicate Finder Model (Select clause): This model identifies the target concepts (predicates) in the NL sentence. In the case of a database query language, a predicate refers to the SELECT part of the query. Once predicates are identified, it becomes easier to extract entities from the remaining sentence. -Entity Finder Model (values in the Where clause): This model identifies the relations (values/entities) in the query. In some cases the model misses or captures extra words; to tackle this, the predicted value is searched in Apache Solr. The structured data for the domain is assumed to be present in Lucene. After the search we pick the entity from the database with the highest similarity score. -Meta Type Model: This model identifies the type of concepts (predicates and values) at the node or table level. If a concept is present in more than one table, the type information helps in disambiguation. This helps make the overall framework domain agnostic. -Aggregations and Operators Model: In this model, aggregations and operators are predicted for predicates and entities respectively.</figDesc><table><row><cell>Our framework currently supports the following aggregation functions: count, groupby, min, max, sum, asc sort, desc sort. Similarly, the following operators are also supported: =; &gt;; &lt;; &lt;&gt;; ≥; ≤; like.</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Natural Language Business Intelligence Question Answering through SeqtoSeq Transfer Learning</title>
		<author>
			<persName><forename type="first">Amit</forename><surname>Sangroya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pratik</forename><surname>Saini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mrinal</forename><surname>Rawat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gautam</forename><surname>Shroff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Anantaram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">DLKT: The 1st Pacific Asia Workshop on Deep Learning for Knowledge Transfer</title>
				<imprint>
			<publisher>PAKDD</publisher>
			<date type="published" when="2019-04">April 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning</title>
		<author>
			<persName><forename type="first">Victor</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Caiming</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Socher</surname></persName>
		</author>
		<ptr target="https://doi.org/arXiv:1709.00103" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF</title>
		<author>
			<persName><forename type="first">Xuezhe</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eduard</forename><forename type="middle">H</forename><surname>Hovy</surname></persName>
		</author>
		<idno>CoRR,abs/1603.01354</idno>
		<ptr target="https://dblp.org" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Framework for Text-Based Conversational User-Interface for Business Applications</title>
		<author>
			<persName><forename type="first">Shefali</forename><surname>Bhat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Anantaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hemant</forename><forename type="middle">K</forename><surname>Jain</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-76719-031</idno>
		<idno>conf/ksem/2007</idno>
		<ptr target="https://dblp.org/rec/bib/conf/ksem/BhatAJ07" />
	</analytic>
	<monogr>
		<title level="m">Knowledge Science, Engineering and Management</title>
				<meeting><address><addrLine>KSEM Melbourne, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-11">November 2007</date>
		</imprint>
	</monogr>
	<note>Second International Conference</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">NLTK: The Natural Language Toolkit</title>
		<author>
			<persName><forename type="first">Edward</forename><surname>Loper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><surname>Bird</surname></persName>
		</author>
		<idno type="DOI">10.3115/1118108.1118117</idno>
		<ptr target="https://doi.org/10.3115/1118108.1118117" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics</title>
				<meeting>the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics<address><addrLine>Philadelphia, Pennsylvania</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="63" to="70" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The Stanford CoreNLP Natural Language Processing Toolkit</title>
		<author>
			<persName><forename type="first">Christopher</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mihai</forename><surname>Surdeanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jenny</forename><surname>Finkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><forename type="middle">J</forename><surname>Bethard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Mcclosky</surname></persName>
		</author>
		<ptr target="http://www.aclweb.org/anthology/P/P14/P14-5010" />
	</analytic>
	<monogr>
		<title level="m">Association for Computational Linguistics (ACL) System Demonstrations</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="55" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Constructing an Interactive Natural Language Interface for Relational Databases</title>
		<author>
			<persName><forename type="first">Fei</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Jagadish</surname></persName>
		</author>
		<idno type="DOI">10.14778/2735461.2735468</idno>
		<ptr target="https://doi.org/10.14778/2735461.2735468" />
	</analytic>
	<monogr>
		<title level="m">Proc. VLDB Endow</title>
				<meeting>VLDB Endow</meeting>
		<imprint>
			<date>September</date>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="73" to="84" />
		</imprint>
	</monogr>
	<note>VLDB Endowment</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Neural Architectures for Named Entity Recognition</title>
		<author>
			<persName><forename type="first">Guillaume</forename><surname>Lample</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miguel</forename><surname>Ballesteros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandeep</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kazuya</forename><surname>Kawakami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chris</forename><surname>Dyer</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N16-1030</idno>
		<ptr target="http://aclweb.org/anthology/N16-1030" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</title>
				<meeting>the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics<address><addrLine>San Diego, California</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="260" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Introduction to Formal Concept Analysis and Its Applications in Information Retrieval and Related Fields</title>
		<author>
			<persName><forename type="first">Dmitry</forename><forename type="middle">I</forename><surname>Ignatov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Russian Summer School in Information Retrieval</title>
				<imprint>
			<date type="published" when="2015-12">December 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Determination of interesting rules in FCA using information gain</title>
		<author>
			<persName><forename type="first">K</forename><surname>Sumangali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Aswani Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">First International Conference on Networks and Soft Computing (ICNSC2014)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2014-08">August 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">The Minimum Description Length Principle</title>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">D</forename><surname>Grünwald</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>MIT Press</publisher>
			<biblScope unit="page" from="3" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Formal Concept Analysis</title>
		<author>
			<persName><forename type="first">Bernhard</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rudolf</forename><surname>Wille</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Springer</publisher>
			<pubPlace>Berlin,Heidelberg,New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">System of data analysis: Concept Explorer</title>
		<author>
			<persName><forename type="first">Serhiy</forename><forename type="middle">A</forename><surname>Yevtushenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th national conference on Artificial Intelligence KII</title>
				<meeting>the 7th national conference on Artificial Intelligence KII<address><addrLine>Russia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="127" to="134" />
		</imprint>
	</monogr>
	<note>Russian</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">Dheeru</forename><surname>Dua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Casey</forename><surname>Graff</surname></persName>
		</author>
		<ptr target="http://archive.ics.uci.edu/ml" />
		<title level="m">UCI Machine Learning Repository</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
		<respStmt>
			<orgName>University of California, Irvine, School of Information and Computer Sciences</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Formal concept analysis:mathematical foundations</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>Springer Science &amp; Business Media</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Deep contextualized word representations</title>
		<author>
			<persName><forename type="first">Matthew</forename><forename type="middle">E</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mohit</forename><surname>Iyyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matt</forename><surname>Gardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenton</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luke</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno>CoRR, abs/1802.05365</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">Xuezhe</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eduard</forename><forename type="middle">H</forename><surname>Hovy</surname></persName>
		</author>
		<idno>CoRR, abs/1603.01354</idno>
		<title level="m">End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
