<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Improved Question Answering using Domain Prediction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Himani</forename><surname>Srivastava</surname></persName>
							<email>srivastava.himani@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Prerna</forename><surname>Khurana</surname></persName>
							<email>prerna.khurana2@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Saurabh</forename><surname>Srivastava</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vaibhav</forename><surname>Varshney</surname></persName>
							<email>varshney.v@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lovekesh</forename><surname>Vig</surname></persName>
							<email>lovekesh.vig@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Puneet</forename><surname>Agarwal</surname></persName>
							<email>puneet.a@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gautam</forename><surname>Shroff</surname></persName>
							<email>gautam.shroff@tcs.com</email>
							<affiliation key="aff0">
								<orgName type="institution">TCS Research New Delhi</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Improved Question Answering using Domain Prediction</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9CC04C7D45C4AA2D52021F6FC28E78D2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T05:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Question answering over Knowledge Graph</term>
					<term>Triple Input Siamese Network</term>
					<term>Domain Prediction</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Question Answering over Knowledge Graphs has mainly utilised the mentioned entity and the relation to predict the answer. However, a key piece of contextual information that is missing in these approaches is the knowledge of the broad domain (such as sports or music) to which the answer belongs. The current paper proposes to infer the domain of the answer via a pre-trained BERT [10] classification model, and to utilise the inferred domain as an additional input to yield state-of-the-art performance on single-relation (SimpleQuestions) and multi-relation (WebQSP) Question Answering benchmarks. We employ a triple input Siamese network architecture that learns to predict the semantic similarity between the question, the inferred domain, and the relation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Question answering (QA) over large-scale knowledge graphs has been the focus of much NLP research. In this paper, we focus on natural language questions taken from the SimpleQuestions <ref type="bibr" target="#b6">[7]</ref> and WebQSP <ref type="bibr" target="#b2">[3]</ref> datasets, which contain tuples of the form (subject, relation, object, question). We tackle the QA problem in three steps: 1) extraction of the mentioned entity from the question and linking it to an entity in the Knowledge Graph; 2) detection of the domain of the object (answer); 3) prediction of the most relevant relation for answering the question.</p><p>Prior deep learning approaches use the relation as a class label only and hence do not capture the semantic-level correlation between the question and the relation. To overcome this limitation, we propose a Triple Input Siamese Metric Learning Model (TISML) that scores the similarity between questions and candidate relations, and thereby indirectly predicts the relation most relevant to a given question. However, this approach was observed to fail at times when the words of a candidate relation are highly similar to the words present in the question (discussed in Section 8, Type-A), which tends to mislead the model into predicting the relation incorrectly. We therefore propose that if the broad domain of the expected answer is also input to the model, the model tends to select more relevant relations, improving relation prediction and resulting in state-of-the-art performance. Consider this question from the SimpleQuestions dataset: "who is a production company that performed Othello".</p><p>Here we first extract the mentioned entity, "Othello", using a model (referred to as the Entity Tagging Model), and identify all the relations of this entity in the knowledge graph as candidate relations. 
Consider two of the candidate relations, "theater/ theater_production/ producing_company" and "film/ film/ production_companies". A model that takes only the question and the candidate relation as input predicts "film/ film/ production_companies" as the correct relation, which is actually wrong. However, if we also input the domain of the answer, "theater", it helps the model score the candidate relations appropriately and predict "theater/ theater_production/ producing_company" as the correct relation. The main contributions of this paper are: 1) we demonstrate that a metric learning similarity scoring network, along with the injected domain knowledge, enhances Question Answering over the Knowledge Graph; 2) we release the SimpleQuestions and WebQSP datasets<ref type="foot" target="#foot_0">1</ref> created for our experiments to enable further research.</p><p>The terms "mentioned entity" and "subject name" mean the same thing and may be used interchangeably.</p></div>
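The disambiguation described above can be sketched as a toy heuristic. This is an illustration only, not the paper's learned model: `pick_relation` is a hypothetical helper that prefers candidate relations whose leading Freebase-style component matches the inferred domain.

```python
# Toy illustration of domain-based disambiguation (not the learned TISML model).
# Relation names are assumed to be Freebase-style "domain/type/property" strings.
def pick_relation(candidates, inferred_domain):
    """Prefer candidate relations whose leading component matches the domain."""
    in_domain = [r for r in candidates if r.split("/")[0] == inferred_domain]
    return in_domain[0] if in_domain else candidates[0]

candidates = [
    "film/film/production_companies",
    "theater/theater_production/producing_company",
]
# Without the domain, a purely lexical model tends to pick the "film" relation;
# with the inferred domain "theater", the correct relation is selected.
print(pick_relation(candidates, "theater"))
# -> theater/theater_production/producing_company
```

In the actual system this preference is not a hard filter but is learned by the similarity network described in Section 6.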
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">PROBLEM DESCRIPTION</head><p>We assume that a background Knowledge Graph comprising a set of triples (𝑇 = {𝑡 1 , ..., 𝑡 𝑛 }) is available, where each triple 𝑡 𝑖 is represented as a set of three terms {Subject, Relation, Object}, also referred to as {𝑆, 𝑅, 𝑂}. We are concerned with natural language questions (𝑞 𝑖 ∈ 𝑄) that mention an entity of the knowledge graph (𝑆). We also assume that such questions can be answered using a single triple (for single-relation questions) or multiple triples (for multi-relation questions) of the knowledge graph. For the Othello example above, the ground truth triple comprises the subject 𝑆 𝑖 ="Othello", the relation 𝑅 𝑖 ="theater/ theater_production/ producing_company", and the object 𝑂 𝑖 ="National Theatre of Great Britain". In this context, the objective of the Question Answering task is to retrieve the appropriate answer ("National Theatre of Great Britain") from the knowledge graph.</p><p>We formulate this problem as a supervised learning task. We assume that a set of questions 𝑄 𝑆 = {𝑞 1 , ..., 𝑞 𝑚 } and corresponding ground truth triples 𝑇 𝑆 = {𝑡 1 , ..., 𝑡 𝑚 } (with 𝑡 𝑖 = (𝑠 𝑆 𝑖 , 𝑟 𝑆 𝑖 , 𝑜 𝑆 𝑖 )) are available as training data. The underlying knowledge graph for our work is Freebase <ref type="bibr" target="#b5">[6]</ref>. For the SimpleQuestions dataset we have used a smaller version of Freebase, i.e., FB2M <ref type="bibr" target="#b7">[8]</ref>, and for the WebQSP dataset we have used the full Freebase.</p></div>
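The retrieval objective above can be shown as a minimal sketch: the knowledge graph is modelled as a set of (subject, relation, object) triples and queried with the linked subject and the predicted relation. The second triple here is a hypothetical stand-in, not an actual Freebase row.

```python
# Minimal sketch of the answer-retrieval step over a toy triple store.
KG = {
    ("Othello", "theater/theater_production/producing_company",
     "National Theatre of Great Britain"),
    ("Othello", "film/film/production_companies", "Some Film Studio"),  # hypothetical
}

def answer(subject, relation, kg):
    """Return all objects O such that (subject, relation, O) is in the graph."""
    return [o for (s, r, o) in kg if s == subject and r == relation]

print(answer("Othello", "theater/theater_production/producing_company", KG))
# -> ['National Theatre of Great Britain']
```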
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">RELATED WORK</head><p>Mapping a natural language question to a knowledge graph is a well-studied task, and a significant amount of work has been done on this topic over the last two decades [ <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b22">[23]</ref>, <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b18">[19]</ref>]. As per recent trends, answering natural language queries via knowledge graphs follows two broad approaches, namely "Semantic Parsing based" and "Information Extraction based", which are further explained below.</p><p>• Semantic Parsing based : These approaches [ <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b12">[13]</ref>] involve translating natural language queries into SPARQL queries (logical forms) and then projecting these queries onto a knowledge base to extract relevant facts. The advent of deep learning approaches, which capture the semantics of a natural language query, helped to further improve the performance of these systems. The semantics captured through these deep learning approaches are encoded in a fixed-length vector and projected onto a knowledge graph representation to extract relevant facts. • Information Extraction based : Work by <ref type="bibr" target="#b21">[22]</ref> 2 claimed that by using a simple RNN, they were able to obtain better results for both entity tagging and relation detection on the SimpleQuestions dataset. Another work by <ref type="bibr" target="#b23">[24]</ref> used a hierarchical BiLSTM-based Siamese network for relation prediction and claimed that the relation detection task has a direct impact on the Question Answering task on both datasets. An attentive RNN combined with a similarity-matrix-based CNN achieved superior results in <ref type="bibr" target="#b0">[1]</ref>. 
<ref type="bibr" target="#b19">[20]</ref> used a BiLSTM-CRF tagger followed by a BiLSTM to perform mention detection and relation classification respectively. <ref type="bibr" target="#b16">[17]</ref> were among the first to apply BERT <ref type="bibr" target="#b9">[10]</ref> to this task but did not get any improvement over the previous state-of-the-art. <ref type="bibr" target="#b11">[12]</ref> proposed an approach similar to ours using a similarity-based network for relation detection; however, they removed about 2% of the data from the test set. To the best of our knowledge, none of the cited approaches utilises domain information to predict relations.</p><p>In a relation such as people/person/place_of_birth, people represents the domain of the subject, person represents the sub-type of the subject, and place_of_birth is the property or attribute of that person. So, to extract the domain of the subject for a triple (S, R, O), we look at the first component of the relation (R). Since we are tagging the domain of the "answer" to every question in the dataset, we search for the domain of the "object" in a (subject, relation, object, question) tuple by finding the reverse relation between the subject and the object. The process of domain data creation is depicted in figure <ref type="figure" target="#fig_0">1</ref>. Questions tagged with a single domain are referred to as unambiguous questions, while questions tagged with None<ref type="foot" target="#foot_2">3</ref> or with multiple domains are referred to as ambiguous questions. In the SimpleQuestions dataset there were 57,421 unambiguous questions, and 16,432 (None) and 2,057 (multiple-domain) ambiguous questions. Domain tagging for ambiguous questions: In order to tag such questions with the appropriate domain, we referred to the tagged domains of the unambiguous questions, in the following steps :</p><p>(1) Create a one-to-one mapping between relations and domains:</p><p>• Create a mapping table between a relation and a domain for every unambiguous question. 
• Select the most frequently occurring domain for a relation among all the tagged domains. • Update the mapping table with a unique domain for each relation. (2) Tag the domain of each ambiguous question using the mapping table. </p></div>
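The two-step procedure above can be sketched as follows. The tagged pairs here are hypothetical stand-ins for the (relation, domain) labels obtained from unambiguous questions; the real mapping is built over the full training data.

```python
from collections import Counter, defaultdict

# Hypothetical (relation, domain) labels from unambiguous questions.
unambiguous = [
    ("people/person/place_of_birth", "people"),
    ("people/person/place_of_birth", "people"),
    ("people/person/place_of_birth", "base"),   # a noisy tag
    ("music/album_release_type/albums", "music"),
]

# Step 1: build a one-to-one mapping, keeping the most frequent domain per relation.
votes = defaultdict(Counter)
for relation, domain in unambiguous:
    votes[relation][domain] += 1
mapping = {rel: counts.most_common(1)[0][0] for rel, counts in votes.items()}

# Step 2: tag an ambiguous question via its candidate relation.
def tag(relation):
    return mapping.get(relation)  # None when the relation was never seen

print(mapping["people/person/place_of_birth"])
# -> people
```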
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">PROPOSED APPROACH</head><p>We present a schematic diagram of the proposed approach in figure <ref type="bibr" target="#b1">(2)</ref>. Here, given a question 𝑞 with ground truth triple (𝑠, 𝑟, 𝑜), we first find the mentioned entity, or subject, of the question via an Entity Tagging Model. From the identified entity, we obtain all the candidate subjects S = {𝑆 1 , ..., 𝑆 𝑝 } (in figure <ref type="figure">2</ref>, S = {𝑆 1 , 𝑆 2 }) and also extract all the candidate relations R = {𝑅 1 , ..., 𝑅 𝑞 } connected to 𝑆 from the Knowledge Graph (in figure <ref type="figure">2</ref>, R = {𝑅 1 , 𝑅 2 }). We also input the question q to another model which predicts the domain of the expected answer for that question; this model is hereafter referred to as the Domain Prediction Model. Further, the question that is input to the TISML Model is modified by inserting the string &lt; 𝑒 &gt; in place of the mentioned entity, yielding a formatted question q'. This is done to ensure that the Siamese model is agnostic to the specific mentioned entity in the question while predicting the triple score, and it also provides positional information to the neural network <ref type="bibr" target="#b23">[24]</ref>.</p></div>
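The question-formatting step can be sketched in a few lines. `format_question` is a hypothetical helper; the paper only specifies that the mentioned entity span is replaced by the placeholder token.

```python
# Sketch of the question-formatting step: the mentioned entity is replaced with
# the placeholder token <e> so that the TISML model is agnostic to the specific
# entity while retaining its position in the question.
def format_question(question: str, mentioned_entity: str) -> str:
    return question.replace(mentioned_entity, "<e>")

q = "who is a production company that performed Othello"
print(format_question(q, "Othello"))
# -> who is a production company that performed <e>
```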
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">MODEL DESCRIPTION</head><p>In this section, we discuss all three individual models in detail.</p><p>(1) Entity Tagging Model: This is a sequence labelling task (IO tagging) which uses a BiLSTM and a Conditional Random Field layer <ref type="bibr" target="#b19">[20]</ref> to detect the mentioned entity in the question. K entity candidates are predicted using the top-K Viterbi algorithm. Further, candidate aliases are extracted from the Freebase SQL table by querying it with the predicted K candidates. The candidates whose aliases have the minimum Levenshtein distance to the detected mentioned entity become the predicted subject names, and their corresponding machine ids are retrieved as candidate machine ids. While creating the SQL tables for Freebase, different string aliases are mapped along with every machine id (MId), in the format (MId | alias | alias-normalized-punctuation | alias-normalized-punctuation-stem | alias-preprocessed), for example:</p><formula xml:id="formula_0">(0c1n99q | gulliver | gulliver | gulliv | gulliver)</formula><p>This model is used by <ref type="bibr" target="#b19">[20]</ref> <ref type="foot" target="#foot_3">5</ref>, which is the state-of-the-art algorithm for Question Answering over Knowledge Graphs and is used as the baseline algorithm for comparing our results; hence, we also use the same model for our task.</p><p>(2) Domain Prediction Model: This is a supervised classification task, where the input is a question q and the output is the predicted domain of the answer type of the question. For this task, we use a pre-trained BERT Large <ref type="bibr" target="#b9">[10]</ref> classification model and fine-tune it on the SimpleQuestions dataset by adding an additional fully connected layer on top of BERT, learning the weights of this layer to predict the correct domain for the question. We fine-tune the model for 5 epochs with a sequence length of 40 and a batch size of 64. 
This model outperforms other classification models, namely LSTM <ref type="bibr" target="#b13">[14]</ref>, CNN <ref type="bibr" target="#b15">[16]</ref>, BiLSTM with attention <ref type="bibr" target="#b10">[11]</ref>, and Capsule Network <ref type="bibr" target="#b14">[15]</ref>; results for domain prediction are presented in table <ref type="table" target="#tab_1">1</ref>. This is because BERT is pre-trained on a huge corpus (Wikipedia (2.5 billion words) and BookCorpus (800 million words)) and can thus leverage the knowledge it has learned, resulting in better prediction of the domains.</p><p>(3) Triple Input Siamese Metric Learning Model: In order to select the correct relation for the question q, we use a TISML Model<ref type="foot" target="#foot_4">6</ref> (refer to figure <ref type="figure" target="#fig_1">3</ref>) which captures the semantics between all the inputs (question, relation, domain). This network consists of 3 different embedding generator networks: a GloVe Embedding Layer, a 1D-CNN Layer and an LSTM Layer. Each input is passed through these networks, which generate their respective embeddings. These embeddings are then concatenated through a Merge layer followed by multiple dense layers. The final embedding is used to compute a score between 0 and 1 that indicates whether the triple has the correct relation or not.</p></div>
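The triple-input scoring interface can be illustrated with a greatly simplified, untrained stand-in: instead of the learned GloVe/CNN/LSTM embeddings, each input is represented as a bag-of-words vector, the question and the inferred domain are merged, and the candidate relation is scored by cosine similarity (a value in [0, 1] for non-negative vectors). All functions here are hypothetical; the trained network learns this similarity rather than computing it lexically.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words stand-in for the learned embedding generators."""
    return Counter(text.lower().replace("/", " ").replace("_", " ").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(question, domain, relation):
    query = bow(question) + bow(domain)      # merge question and domain inputs
    return cosine(query, bow(relation))

# Type-B example from the paper: lexical overlap on "album" plus the domain
# input favours the correct relation.
q = "what is a chinese album"
good = score(q, "music", "music/album_release_type/albums")
bad = score(q, "music", "music/release_track/release")
print(good > bad)
```

Even this crude lexical version prefers the correct relation for the Type-B example; the learned network additionally handles cases with no literal overlap.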
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">RESULTS</head><p>We have compared our approach with the previous deep learning approaches mentioned in Section 3. The evaluation metric for the SimpleQuestions dataset is the same as that of the baseline approach <ref type="bibr" target="#b19">[20]</ref>, i.e., accuracy. For the WebQSP dataset, evaluation is done similarly to <ref type="bibr" target="#b23">[24]</ref>, where Top-1 accuracy is reported for answer prediction, i.e., among multiple predicted relations we pick the top-scored relation and use it for answer prediction (for the WebQSP dataset, only 64 questions in the test data have multiple answers; the rest have multiple relations with a single tagged answer). We also compare against our approach without using domain information. From Table <ref type="table" target="#tab_2">2</ref>, it can be seen that augmenting domain knowledge improves performance on both datasets. In Table <ref type="table" target="#tab_4">3</ref> we show a few examples from the SimpleQuestions dataset for which the relations were predicted wrongly by the baseline approach but could be answered correctly with our approach. In our analysis, we found that such errors were corrected by our approach for two main reasons:</p><p>(1) Type-A (Improvement due to Domain Prediction Model): In the question "What high school is located in Hugo", the baseline model predicts the relation location/ location/ containedby, which is not correct; this could be because of the word "located" in the query, or because similar-pattern questions belong to this relation, and hence their model, which is a relation classification model, predicts a relation containing "location". However, our model predicts the domain of this question as "education", since the question is essentially asking about a "high school", which belongs to the education domain. 
This information pushes the Triple Input Siamese Metric Learning Model to select the relation closest to the education domain, which is education/ school_category/ schools_of_this_kind.</p><p>(2) Type-B (Improvement due to Similarity Model): Another type of error corrected by our approach arises when the relations predicted by the two approaches are from the same domain but differ in sub-domain. For instance, for the question "what is a chinese album", the baseline model detects music/ release_track/ release, whereas our model predicts music/ album_release_type/ albums as the correct relation. This is because our model exploits the semantic-level correlation between the question and the relation and is able to match the two at a literal level, which can be seen from the presence of the word "album" in both the question and the relation. (3) Category-3 Error: There are 386 questions in the test set that do not contain a head_entity. Previous work by <ref type="bibr" target="#b11">[12]</ref> removed such questions from the evaluation of their model; we, however, did not remove these questions from the dataset. For example, the question "Who is an alumni involved in IT" does not contain a mentioned entity, which is a data creation error; such questions cannot be answered and are predicted as None.</p><p>(4) Category-4 Error: These errors occur because the Entity Tagging Model is not able to identify the subject present in the question correctly, which results in the selection of a wrong candidate relation set from the knowledge graph. For example, the question "what's the name of a popular Japanese to Portuguese dictionary" has the ground truth mentioned entity "dictionary"; however, the Entity Tagging Model predicts "Portuguese" as the subject, which leads to a wrong set of candidate relations and hence a wrong answer prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10">CONCLUSION</head><p>In this paper, we propose the use of domain information as an additional input for predicting the correct relation on both single-relation and multi-relation datasets. This information is predicted from the question using a Domain Prediction Model and helps strengthen the TISML Model's selection of the most appropriate relation for the question. Our proposed approach outperforms previous approaches to Question Answering over Knowledge Graphs and achieves new state-of-the-art results on the SimpleQuestions and WebQSP datasets. For future work, we will also explore datasets like GraphQuestions <ref type="bibr" target="#b20">[21]</ref> and ComplexQuestions <ref type="bibr" target="#b1">[2]</ref> to deal with more aspects of general Question Answering.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Domain Data Creation</figDesc><graphic coords="2,311.98,374.79,252.20,139.82" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Network Architecture of TISML</figDesc><graphic coords="4,311.68,83.69,264.76,158.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Domain Prediction Result</figDesc><table><row><cell>Approach</cell><cell>Accuracy</cell></row><row><cell>LSTM</cell><cell>86</cell></row><row><cell>CNN</cell><cell>89</cell></row><row><cell>Bi-LSTM+Attention</cell><cell>91</cell></row><row><cell>Capsule</cell><cell>91</cell></row><row><cell>Fine-tuned BERT</cell><cell>93.16</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Accuracy on the SimpleQuestions (SQ) and WebQSP (WQ) datasets (*Average Accuracy over 5 runs)</figDesc><table><row><cell>Approach</cell><cell>SQ</cell><cell>WQ</cell></row><row><cell>HR-BiLSTM &amp; CNN &amp; BiLSTM-CRF [24]</cell><cell>77</cell><cell>63.9</cell></row><row><cell>GRU [18]</cell><cell>71.2</cell><cell>-</cell></row><row><cell>BiGRU-CRF &amp; BiGRU [19]</cell><cell>73.7</cell><cell>-</cell></row><row><cell>BiLSTM &amp; BiGRU [19]</cell><cell>74.9</cell><cell>-</cell></row><row><cell>Attentive RNN &amp; Similarity Matrix based CNN [1]</cell><cell>76.8</cell><cell>-</cell></row><row><cell>BiLSTM-CRF &amp; BiLSTM [20]</cell><cell>78.</cell><cell>-</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 3 :</head><label>3</label><figDesc>Analysis of improvement due to domain prediction and similarity model over the baseline model</figDesc><table><row><cell>Type</cell><cell>Question</cell><cell>Actual Relation</cell><cell>Baseline Model Prediction</cell><cell>Triple Input Siamese Metric Learning Prediction</cell><cell>Predicted Domain</cell></row><row><cell>Type-A</cell><cell>What high school is located in Hugo</cell><cell>education/school_category/schools_of_this_kind</cell><cell>location/location/containedby</cell><cell>education/school_category/schools_of_this_kind</cell><cell>education</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 4 :</head><label>4</label><figDesc>Error Analysis. While analysing the errors of the test set, we observed that most errors can be broadly classified into 4 categories. These errors are discussed below and reported in Table 4; examples have been taken from the SimpleQuestions dataset. (1) Category-1 Error: Plenty of erroneous questions fall under this category. Even though the Domain Prediction Model predicts the domain correctly, these errors occur due to the highly ambiguous structure of the relations and their tagged questions. To illustrate, the query "Whats a track from dawn escapes" has music/ release/ track_list as the actual relation while the predicted relation is music/ release/ track, whereas another question, "What's a track from the release 9 seconds", has music/ release/ track as the actual relation while the predicted relation is music/ release/ track_list. This clearly confuses the Triple Input Siamese Metric Learning Model, as the patterns of the questions are identical in nature and the relations are also very similar. (2) Category-2 Error: These types of errors occur because the domain of the question given in the Knowledge Graph is vague. Certain domains in Freebase do not have a clear definition; for instance, domains such as Base, Common, User and Type consist of questions that are similar to questions from other domains, and such questions comprise about 4% of the test data. If we observe the questions in Table 4 from these domains, they do not share a common pattern. This misleads the Domain Prediction Model, which results in incorrect downstream relation detection and thus a wrong answer. For example, given the question "What is Andrew Deemer's profession", the Domain Prediction Model will predict "people" as the domain and thus the Triple Input Siamese Metric Learning Model predicts people/ person/ profession, whereas the ground truth relation of this question is common/ topic/ notable_type while the ground truth domain is "common".</figDesc><table><row><cell>Error Type</cell><cell>Question</cell><cell>Actual Relation</cell><cell>Predicted Relation</cell><cell>Actual Domain</cell><cell>Predicted Domain</cell><cell>Misclassification</cell></row><row><cell>Category-1</cell><cell>Whats a track from "dawn escapes"</cell><cell>music/release/track_list</cell><cell>music/release/track</cell><cell>music</cell><cell>music</cell><cell>104</cell></row><row><cell></cell><cell>Whats a track from the release 9 seconds</cell><cell>music/release/track</cell><cell>music/release/track_list</cell><cell>music</cell><cell>music</cell><cell>14</cell></row><row><cell></cell><cell>Which album was "brothers sisters" listed on</cell><cell>music/release_track/release</cell><cell>music/recording/release</cell><cell>music</cell><cell>music</cell><cell>112</cell></row><row><cell></cell><cell>which albums contain the track "song: starcandy" ?</cell><cell>music/recording/release</cell><cell>music/release_track/release</cell><cell>music</cell><cell>music</cell><cell>10</cell></row><row><cell>Category-2</cell><cell>What is Andrew deemers profession ?</cell><cell>common/topic/notable_type</cell><cell>people/person/profession</cell><cell>common</cell><cell>people</cell><cell>52</cell></row><row><cell></cell><cell>what is the occupation of hans krása ?</cell><cell>people/person/profession</cell><cell>common/topic/notable_type</cell><cell>people</cell><cell>common</cell><cell>14</cell></row><row><cell></cell><cell>is deena a male or female</cell><cell>base/givennames/given_name/gender</cell><cell>people/person/gender</cell><cell>base</cell><cell>people</cell><cell>106</cell></row><row><cell>Category-3</cell><cell>who is an alumni involved in IT (HEAD=nan)</cell><cell>common/topic/subjects</cell><cell>people/profession/people_with_this_profession</cell><cell>common</cell><cell>people</cell><cell>386</cell></row></table><note>Error categories: Category-1 (error due to the Triple Input Siamese Metric Learning Model); Category-2 (error due to the Domain Prediction Model); Category-3 (unanswerable questions); Category-4 (error due to the Entity Tagging Model).</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://drive.google.com/drive/folders/1vkyeg9JEIZBCkQrezguMwwgJDmje6Lq_?usp=sharing</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">DATASET DESCRIPTION: The SimpleQuestions dataset<ref type="bibr" target="#b8">[9]</ref> is split into 75,910 train, 10,845 dev and 21,686 test questions, and the WebQSP dataset, taken from<ref type="bibr" target="#b23">[24]</ref>, has 3,116 questions in the train set, 1,649 in the test set and 623 in the dev set. Below we explain our method for extracting domain information from the Knowledge Graph and creating an input dataset of (Question, Relation, Domain) triples for the TISML model. Domain Data Creation: To extract domain information from the Freebase Knowledge Graph, we observe that the relation of a question represents three pieces of information. E.g., given the relation people/person/place_of_birth in a triple (S, R, O) of Freebase, people represents the domain of the subject and person represents the sub-type of the subject. (Footnote 2: they have reported 86.8% accuracy, but we, <ref type="bibr" target="#b18">[19]</ref>, and <ref type="bibr" target="#b19">[20]</ref> have not been able to replicate their results.)</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Questions tagged by "None" indicate no domain has been tagged</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_3">https://github.com/PetrochukM/Simple-QA-EMNLP-2018</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_4">Our network is inspired by https://www.linkedin.com/pulse/duplicate-quoraquestion-abhishek-thakur/; that model uses two inputs, and we add an extra input, i.e., the inferred domain</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Question Answering over Freebase via Attentive RNN with Similarity Matrix based CNN</title>
		<idno type="arXiv">arXiv:1804.03317</idno>
		<ptr target="http://arxiv.org/abs/1804.03317" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Constraint-Based Question Answering with Knowledge Graph</title>
		<author>
			<persName><forename type="first">Junwei</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nan</forename><surname>Duan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhao</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ming</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tiejun</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="https://www.aclweb.org/anthology/C16-1236" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee</title>
				<meeting>COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee<address><addrLine>Osaka, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="2503" to="2514" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Semantic Parsing on Freebase from Question-Answer Pairs</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Berant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Chou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roy</forename><surname>Frostig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Percy</forename><surname>Liang</surname></persName>
		</author>
		<ptr target="https://www.aclweb.org/anthology/D13-1160" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2013 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Seattle, Washington, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1533" to="1544" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Semantic Parsing on Freebase from Question-Answer Pairs</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Berant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Chou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roy</forename><surname>Frostig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Percy</forename><surname>Liang</surname></persName>
		</author>
		<ptr target="https://www.aclweb.org/anthology/D13-1160" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2013 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Seattle, Washington, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1533" to="1544" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Semantic Parsing via Paraphrasing</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Berant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Percy</forename><surname>Liang</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/P14-1133</idno>
		<ptr target="https://doi.org/10.3115/v1/P14-1133" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 52nd Annual Meeting of the Association for Computational Linguistics<address><addrLine>Baltimore, Maryland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1415" to="1425" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Freebase: a collaboratively created graph database for structuring human knowledge</title>
		<author>
			<persName><forename type="first">Kurt</forename><surname>Bollacker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Colin</forename><surname>Evans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Praveen</forename><surname>Paritosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tim</forename><surname>Sturge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jamie</forename><surname>Taylor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SIGMOD Conference</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1247" to="1250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Question Answering with Subgraph Embeddings</title>
		<author>
			<persName><forename type="first">Antoine</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sumit</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jason</forename><surname>Weston</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/D14-1067</idno>
		<ptr target="https://doi.org/10.3115/v1/D14-1067" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="615" to="620" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Question Answering with Subgraph Embeddings</title>
		<author>
			<persName><forename type="first">Antoine</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sumit</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jason</forename><surname>Weston</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/D14-1067</idno>
		<ptr target="https://doi.org/10.3115/v1/D14-1067" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="615" to="620" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Large-scale Simple Question Answering with Memory Networks</title>
		<author>
			<persName><forename type="first">Antoine</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nicolas</forename><surname>Usunier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sumit</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jason</forename><surname>Weston</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1506.02075</idno>
		<ptr target="http://arxiv.org/abs/1506.02075" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">Jacob</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ming-Wei</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenton</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kristina</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<ptr target="http://arxiv.org/abs/1810.04805" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Text Classification Research with Attention-based Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">Changshun</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lei</forename><surname>Huang</surname></persName>
		</author>
		<idno type="DOI">10.15837/ijccc.2018.1.3142</idno>
		<ptr target="https://doi.org/10.15837/ijccc.2018.1.3142" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Computers Communications &amp; Control</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">02</biblScope>
			<biblScope unit="page">50</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Retrieve and Re-rank: A Simple and Effective IR Approach to Simple Question Answering over Knowledge Graphs</title>
		<author>
			<persName><forename type="first">Vishal</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manoj</forename><surname>Chinnakotla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manish</forename><surname>Shrivastava</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/W18-5504</idno>
		<ptr target="https://doi.org/10.18653/v1/W18-5504" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)</title>
				<meeting>the First Workshop on Fact Extraction and VERification (FEVER)<address><addrLine>Brussels, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="22" to="27" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge</title>
		<author>
			<persName><forename type="first">Yanchao</forename><surname>Hao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuanzhe</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kang</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shizhu</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhanyi</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hua</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-1021</idno>
		<ptr target="https://doi.org/10.18653/v1/P17-1021" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 55th Annual Meeting of the Association for Computational Linguistics<address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="221" to="231" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Long Short-term Memory</title>
		<author>
			<persName><forename type="first">Sepp</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jürgen</forename><surname>Schmidhuber</surname></persName>
		</author>
		<idno type="DOI">10.1162/neco.1997.9.8.1735</idno>
		<ptr target="https://doi.org/10.1162/neco.1997.9.8.1735" />
	</analytic>
	<monogr>
		<title level="j">Neural computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Text Classification using Capsules</title>
		<author>
			<persName><forename type="first">Jaeyoung</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sion</forename><surname>Jang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sungchul</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eunjeong Lucy</forename><surname>Park</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">376</biblScope>
			<biblScope unit="page" from="214" to="221" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Object Recognition with Gradient-Based Learning</title>
		<author>
			<persName><forename type="first">Yann</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Haffner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000-08">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Pretrained Transformers for Simple Question Answering over Knowledge Graphs</title>
		<author>
			<persName><forename type="first">Denis</forename><surname>Lukovnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Asja</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jens</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web -ISWC 2019</title>
				<editor>
			<persName><forename type="first">Chiara</forename><surname>Ghidini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Olaf</forename><surname>Hartig</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Maria</forename><surname>Maleshkova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Vojtěch</forename><surname>Svátek</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Isabel</forename><surname>Cruz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Aidan</forename><surname>Hogan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Jie</forename><surname>Song</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Maxime</forename><surname>Lefrançois</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Fabien</forename><surname>Gandon</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="470" to="486" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Neural Network-based Question Answering over Knowledge Graphs on Word and Character Level</title>
		<author>
			<persName><forename type="first">Denis</forename><surname>Lukovnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Asja</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jens</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sören</forename><surname>Auer</surname></persName>
		</author>
		<idno type="DOI">10.1145/3038912.3052675</idno>
		<ptr target="https://doi.org/10.1145/3038912.3052675" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Strong Baselines for Simple Question Answering over Knowledge Graphs with and without Neural Networks</title>
		<author>
			<persName><forename type="first">Salman</forename><surname>Mohammed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peng</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jimmy</forename><surname>Lin</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N18-2047</idno>
		<ptr target="https://doi.org/10.18653/v1/N18-2047" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>New Orleans, Louisiana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="291" to="296" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Petrochuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luke</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/D18-1051</idno>
		<ptr target="https://doi.org/10.18653/v1/D18-1051" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2018 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Brussels, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="554" to="558" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">On Generating Characteristic-rich Question Sets for QA Evaluation</title>
		<author>
			<persName><forename type="first">Yu</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Huan</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brian</forename><surname>Sadler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mudhakar</forename><surname>Srivatsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Izzeddin</forename><surname>Gür</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zenghui</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xifeng</forename><surname>Yan</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/D16-1054</idno>
		<ptr target="https://doi.org/10.18653/v1/D16-1054" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2016 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Austin, Texas</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="562" to="572" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Simple and Effective Question Answering with Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">Ferhan</forename><surname>Türe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oliver</forename><surname>Jojic</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1606.05029</idno>
		<ptr target="http://arxiv.org/abs/1606.05029" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Semantic Parsing for Single-Relation Question Answering</title>
		<author>
			<persName><forename type="first">Wen-Tau</forename><surname>Yih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Meek</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/P14-2105</idno>
		<ptr target="https://doi.org/10.3115/v1/P14-2105" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Short Papers</title>
		<meeting>the 52nd Annual Meeting of the Association for Computational Linguistics<address><addrLine>Baltimore, Maryland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="643" to="648" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Improved Neural Relation Detection for Knowledge Base Question Answering</title>
		<author>
			<persName><forename type="first">Mo</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wenpeng</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kazi</forename><surname>Saidul Hasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cicero</forename><surname>Dos Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bing</forename><surname>Xiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bowen</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-1053</idno>
		<ptr target="https://doi.org/10.18653/v1/P17-1053" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 55th Annual Meeting of the Association for Computational Linguistics<address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="571" to="581" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
