<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Machine Comprehension of Text Using Combinatory Categorial Grammar and Answer Set Programs</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Piotr</forename><surname>Chabierski</surname></persName>
							<email>piotr.chabierski13@imperial.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computing</orgName>
								<orgName type="institution">Imperial College London</orgName>
								<address>
									<postCode>SW7 2AZ</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandra</forename><surname>Russo</surname></persName>
							<email>a.russo@imperial.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computing</orgName>
								<orgName type="institution">Imperial College London</orgName>
								<address>
									<postCode>SW7 2AZ</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mark</forename><surname>Law</surname></persName>
							<email>mark.law09@imperial.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computing</orgName>
								<orgName type="institution">Imperial College London</orgName>
								<address>
									<postCode>SW7 2AZ</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Krysia</forename><surname>Broda</surname></persName>
							<email>k.broda@imperial.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computing</orgName>
								<orgName type="institution">Imperial College London</orgName>
								<address>
									<postCode>SW7 2AZ</postCode>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Machine Comprehension of Text Using Combinatory Categorial Grammar and Answer Set Programs</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">EC0457A3AC7E84B9E85A0786B13A6143</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We present an automated method for generating Answer Set Programs from narratives written in English and demonstrate how such a representation can be used to answer questions about text. The proposed approach relies on a transparent interface between the syntax and semantics of natural language provided by Combinatory Categorial Grammars to translate text into Answer Set Programs, hence creating a knowledge base that, together with background knowledge, can be queried.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>Machine comprehension of text is a long-term open problem in Artificial Intelligence and can be assessed by a machine's ability to answer questions about passages of text. One of the main challenges is the capacity to automatically process the text and capture the natural language semantics. Various approaches have been proposed in the literature, e.g. <ref type="bibr" target="#b3">(Bos et al. 2004)</ref>, in particular for translating natural language sentences into first-order logic sentences. More recently, <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref> followed a different direction. Motivated by the need to express the (default) semantics of normative statements, <ref type="bibr">Baral et al.</ref> provided a first step towards an automated way of translating natural language statements into ASP programs, making use of an intermediate representation, called λ-ASP, to construct the semantic representation of sentences from their constituents. The λ-ASP calculus relies on a combinatory categorial grammar (CCG) derivation <ref type="bibr" target="#b12">(Steedman 2000)</ref> to represent the syntactic structure of a sentence. To the best of our knowledge, although effective in expressing normative statements, this approach is mainly confined to a specific dataset of sentences. Consequently, the λ-ASP representation is limited to the generation of domain-specific semantic representations and, as also pointed out by the authors in <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref>, it does not support more complex linguistic constructs such as adverbs or conjunction.</p><p>This paper addresses these limitations by presenting a fully automated method for generating Answer Set Programs (ASP) from narratives written in English in a general and principled manner, demonstrating how such a representation can be used to answer questions about text. 
Similarly to <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref>, the proposed approach relies on the transparent interface between the syntax and semantics of natural language offered by Combinatory Categorial Grammars to translate the text into ASP. The outcome of this translation creates a form of knowledge base that, together with background knowledge, can later be queried. However, the generation of ASP representations from text uses an intermediate representation, called the λ-ASP* calculus, that differs from the λ-ASP calculus proposed in <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref> in two ways. First, it is more general and can handle more advanced grammatical and linguistic constructions such as, among others, relativisation, control and raising. Second, it uses a general-purpose ASP representation language designed with the objective of making it suitable for efficient automated learning of common sense knowledge relevant to a given narrative, by means of current state-of-the-art systems for learning ASP programs <ref type="bibr" target="#b10">(Law, Russo, and Broda 2014)</ref>. Due to space limitations, this latter feature is outside the scope of the paper. Our fully automated mechanism is also applicable to the generation of ASP representations of a large class of questions, including polar questions and "wh-word" questions that require single-word answers. The main contributions of this paper are therefore: • a general-purpose ASP representation for narratives written in natural language • a λ-ASP* calculus that covers complex linguistic phenomena such as coordination, relativisation, control and raising • a wide-coverage algorithm capable of performing translation from natural language to ASP, and • an automatic translation into ASP of questions requiring one-word answers (what, where, who, which). The effectiveness of the approach is demonstrated using a publicly available dataset <ref type="bibr" target="#b13">(Weston et al. 
2015)</ref> as well as a hand-crafted set of sentences, specifically designed to capture the various complex linguistic constructs supported by the approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Background</head><p>In what follows, we outline the two main background formalisms used in our approach, namely answer set programming and combinatory categorial grammars.</p><p>Answer Set Programming (ASP) is an expressive language for knowledge representation and reasoning, based on the stable model semantics <ref type="bibr" target="#b6">(Gelfond and Lifschitz 1988)</ref> and rooted in the field of non-monotonic reasoning and logic programming <ref type="bibr" target="#b10">(Lifschitz 2008)</ref>. Within the scope of this paper, Answer Set Programs are restricted to normal logic programs, so an ASP program is a set of normal rules of the form:</p><formula xml:id="formula_0">r : h ← b1, b2, ..., bm, not bm+1, ..., not bm+k</formula><p>where h is the head of r, denoted head(r), the conjunction b1, b2, ..., bm, not bm+1, ..., not bm+k is the body of r, denoted body(r), and the bi are atoms, with not denoting negation-as-failure. A fact is a rule whose body is empty (m = k = 0).</p><p>The Herbrand Base HB(P) of an ASP program P is the set of all ground atoms constructed using the predicate symbols, constants and function symbols in P. A Herbrand interpretation of P assigns a truth value to every atom in HB(P) and is a model of P if, for every ground rule r in P such that body(r) is satisfied, head(r) is true. A Herbrand model M of P is minimal if no proper subset of M is a model of P. Given an ASP program P and a set M of ground atoms, the reduct P^M of P is the logic program obtained from P by removing every:</p><p>• rule that has a literal not p in its body, where p ∈ M</p><p>• negated literal in the bodies of the remaining rules</p><p>M is an answer set of P if it coincides with the minimal Herbrand model of P^M.</p><p>Combinatory Categorial Grammar (CCG) is an efficiently parsable grammar formalism that enables semantic analysis of natural language by providing a transparent interface between syntax and semantics. 
In CCG, words are assigned syntactic categories defining their syntactic behaviour. The set of CCG categories comprises atomic categories and complex categories recursively constructed from them. Examples of atomic categories include N (bare noun), NP (noun phrase) and S (sentence). Complex categories are of the forms X/Y and X\Y and define functors that, when applied to an argument of category Y, produce a result of category X. The directionality of the slash indicates whether the argument has to occur immediately to the right ("/") or to the left ("\") of the functor. For example, (S\NP)/NP is the category of the transitive verb buy in the sentence Jack bought a car.</p><p>CCG provides a surface-compositional interface between syntax and semantics in which there is a one-to-one mapping between the rules of syntactic and semantic composition <ref type="bibr" target="#b7">(Hockenmaier and Steedman 2007)</ref>. A set of syntactic combinatory rules specifies how adjacent constituents are combined. The most basic rules are forward (FA) and backward (BA) application:</p><formula xml:id="formula_1">FA (&gt;): X/Y : f   Y : a ⇒ X : f(a)
BA (&lt;): Y : a   X\Y : f ⇒ X : f(a)</formula><p>Composition combinatory rules allow two functor categories to combine and produce another functor. An example of a composition rule is forward composition (FC):</p><formula xml:id="formula_2">FC (&gt;B): X/Y : f   Y/Z : g ⇒ X/Z : λx.f(g(x))</formula><p>Type-raising combinatory rules allow an argument category Y to become a functor category X/(X\Y) or X\(X/Y). 
The forward type-raising rule (FT) is given by:</p><formula xml:id="formula_3">FT (&gt;T): Y : a ⇒ X/(X\Y) : λf.f(a)</formula><p>Composition and type-raising rules together are used to handle coordination and the non-local dependencies introduced, for example, by relative clauses, as illustrated by the following syntactic derivation for the relative clause that Jack wanted, from the sentence The car that Jack wanted was expensive.</p><formula xml:id="formula_4">that := (N\N)/(S/NP)
Jack := NP ⇒ (&gt;T) S/(S\NP)
wanted := (S\NP)/NP
S/(S\NP)   (S\NP)/NP ⇒ (&gt;B) S/NP
(N\N)/(S/NP)   S/NP ⇒ (&gt;) N\N</formula><p>The predicate structure can be obtained directly from the syntactic derivation, provided that the semantics of the lexical items, assigned by the lexicon, is known. Using a CCG syntactic derivation and first-order logic augmented with λ-calculus, a first-order representation of the sentence Jack buys cars can be derived as follows:</p><p>Jack := NP : jack   buys := (S\NP)/NP : λx.λy.buy(y, x)   cars := NP : cars
buys cars ⇒ (&gt;) S\NP : λy.buy(y, cars)
Jack buys cars ⇒ (&lt;) S : buy(jack, cars)</p></div>
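The reduct-based definition of answer sets above is small enough to execute directly. The following Python sketch is ours, not part of the paper; it decides whether a set of ground atoms is an answer set of a ground normal program, assuming rules are encoded as (head, positive_body, negative_body) triples:

```python
def reduct(program, M):
    """Gelfond-Lifschitz reduct: drop every rule whose negative body
    intersects M, then strip the negative literals from the rest."""
    return [(h, pos) for (h, pos, neg) in program
            if not set(neg).intersection(M)]

def minimal_model(definite_program):
    """Least Herbrand model of a definite program via forward chaining."""
    M = set()
    changed = True
    while changed:
        changed = False
        for h, pos in definite_program:
            if set(pos).issubset(M) and h not in M:
                M.add(h)
                changed = True
    return M

def is_answer_set(program, M):
    """M is an answer set iff it is the minimal model of the reduct."""
    return minimal_model(reduct(program, M)) == M

# p :- not q.   q :- not p.
program = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_answer_set(program, {"p"}))       # True
print(is_answer_set(program, {"p", "q"}))  # False
```

The two-rule program has exactly the answer sets {p} and {q}, matching the standard even-loop example.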
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Representing Natural Language in ASP</head><p>The signature of the ASP logical representation used in our approach to express sentences written in English consists of four classes of predicates: nominals, events, modifiers and prepositions. The name of each predicate is composed of an arity prefix, which indicates the number of arguments taken by the corresponding word, and a class suffix, which takes one of the four aforementioned values. For example, the transitive verb buy is represented by the predicate binaryEvent(e1, buy, c1, n0), where the first argument e1 serves as an identifier, buy is the lemma of the corresponding word and c1, n0 are the arguments of the verb (agent and theme respectively).</p><p>For the sake of conciseness, throughout this paper the arity prefixes are skipped and nominals, events, modifiers and prepositions are abbreviated in predicate names as nom, ev, mod and prep respectively.</p></div>
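The naming scheme can be illustrated with a tiny helper; it is hypothetical (the paper does not prescribe an implementation, and only the binaryEvent prefix/suffix pair is attested), but it shows how an arity prefix and a class suffix combine into a predicate name:

```python
# Map an arity to its prefix; only "binary" is attested in the paper,
# the other entries are illustrative assumptions.
PREFIX = {1: "unary", 2: "binary", 3: "ternary"}

def predicate_name(arity, cls):
    """Compose a predicate name from an arity prefix and a class suffix."""
    return PREFIX[arity] + cls.capitalize()

# A transitive verb takes two arguments and belongs to the event class:
print(predicate_name(2, "event"))  # binaryEvent
```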
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Preliminary Text Analysis</head><p>First, the input text is split into sentences and tokens. Subsequently, every word in the input text is tagged with the following set of annotations:</p><p>• Lemma: the inflection-free form of the word, used as an argument of the corresponding predicate • Part-of-speech (POS) tag: used by the lexicon to perform a coarse division of words into classes • Named entity tag: enriches the predicate structure and makes it possible to treat multi-word proper names as single entities, e.g. Eiffel Tower as eiffel_tower rather than as a nominal and a modifier • Coreference cluster: groups mentions of the same entity in the text. Cluster identifiers are used to form the identifiers of nominal predicates corresponding to entity mentions, hence embedding coreference information in the predicate structure • Semantic role: derived for predicate (verb) arguments.</p><p>PropBank sense IDs <ref type="bibr" target="#b8">(Kingsbury and Palmer 2002)</ref> are used to order predicate arguments when generating the predicate structure for verbs. For example, in the sentence [Jack] ARG0 kicked [the ball] ARG1 the transitive verb kicked is translated as ev(e1, kick, c0, n0), where c0 and n0 are the identifiers of Jack and the ball, ordered according to their sense IDs (0 and 1 respectively). After annotating the text, a CCG parse tree is derived for each sentence. The CCG parse tree is a binary tree whose leaf nodes correspond to lexical items and internal nodes to phrases, clauses or sentences. Leaves are identified by L as the first element of the node's descriptor, followed by the CCG category, the POS tag and the word itself. Descriptors of internal nodes begin with the identifier T, followed by the CCG category.</p><p>Listing 1: CCG parse tree for the sentence A new and safe car that Jack wanted to buy was really expensive. 
</p><formula xml:id="formula_5">1 &lt;T S[dcl]&gt;
2   &lt;T NP&gt;
3     &lt;L (NP/N) DT A&gt;
4     &lt;T N&gt;
5       &lt;T N&gt;
6         &lt;T (N/N)&gt;
7           &lt;L (N/N) JJ new&gt;
8           &lt;T ((N/N)\(N/N))&gt;
9             &lt;L CONJ CC</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>λ-ASP* Calculus</head><p>The λ-ASP* calculus serves as an intermediate representation between natural language and ASP. It follows the syntax of ASP but extends it with abstraction and application as known from λ-calculus, which enables the compositional derivation of semantic representations of phrases and sentences from their constituents. λ-ASP* expressions have the following general form:</p><formula xml:id="formula_6">[h1, ..., hk] λv1 ... λvl . p1(a[1,1], ..., a[1,r1]), ..., pm(a[m,1], ..., a[m,rm]), w1@(a[m+1,1], ..., a[m+1,r(m+1)]), ..., wn@(a[m+n,1], ..., a[m+n,r(m+n)])
(k heads; l abstractors; m predicates; n applications)</formula><p>where the vi and wj are bound variables, the arguments a[x,y] are bound variables or constants, and the pt are predicate names included in the previously defined signature. The arities of the predicates and applications are denoted by r1, ..., r(m+n). Every predicate has a non-zero arity; applications, however, can take no arguments. In the case of applications, the bound variables wj are referred to as function placeholders and are replaced by other λ-ASP* expressions, to which the arguments a[m+j,1], ..., a[m+j,r(m+j)] are then applied. Predicates and applications constitute the body of a λ-ASP* expression. Every λ-ASP* expression has a non-empty set H of bound variables or constants, called expression heads, which corresponds to the linguistic head of a phrase and can have more than one element to account for coordination. As an example, consider the λ-ASP* expression for the transitive verb buy, given by: [e0] λv1.λv0. ev(e0, buy, v0, v1), v1@(), v0@(). The constant e0 is an identifier of the verb buy and also the head of the expression. 
Bound variables v0, v1 can be replaced, via the process of application (described further in this section), by constants and variables, when the bound variable occurs as a predicate or application argument, or by other λ-ASP* expressions, when the bound variable is a function placeholder in an application. The list of bound variables is followed by an event predicate and two applications with no arguments, which act as placeholders for the subject and object of the verb buy.</p><p>For a λ-ASP* expression e, preds(e) and apps(e) denote the sets of all predicates and applications in e respectively, and abs(e) denotes the ordered list of bound variables. The set of heads of e is denoted by He. For p ∈ preds(e), args(p) denotes the ordered list of arguments of p; the same notation is used for applications. Given two λ-ASP* expressions e and f, the application g = e@f, where v denotes the first bound variable in e, is computed as follows:</p><p>• for every a ∈ apps(e) of the form w@(b1, b2, ..., bk): if v = w, compute recursively f′ = f@(b1, b2, ..., bk), which is equivalent to (((f@b1)@b2)...)@bk, include preds(f′) and apps(f′) in g, and set f = f′. If v ∈ args(a), |Hf| copies of a are included in g, and v in the i-th copy is replaced by the i-th head of f. If v ≠ w and v ∉ args(a), a is included in g unmodified</p><p>• for every p ∈ preds(e): if v ∈ args(p), |Hf| copies of p are included in g, and v in the i-th copy is replaced by the i-th head of f. If v ∉ args(p), p is included in g unmodified</p><p>• the bound variables of g are given by abs(g) = (abs(e) − [v]) + abs(f), where '−' denotes removal of an element from a list and '+' denotes list concatenation</p><formula xml:id="formula_8">• if v ∈ He, then Hg = (He \ {v}) ∪ Hf ; otherwise Hg = He</formula><p>A constant c is represented by an expression e such that preds(e) = apps(e) = ∅, abs(e) = [ ] and He = {c}. For a variable v, the expression is modified so that abs(e) = [v].</p></div>
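For illustration only, the argument-substitution core of the application operation can be sketched in Python. This is a deliberately simplified model of our own (a single head, no function placeholders), so it covers only the easiest case of the definition above:

```python
# A simplified λ-ASP* expression: a head list, an ordered list of
# abstractors, and body predicates given as (name, [args]) tuples.
class Expr:
    def __init__(self, heads, abstractors, preds):
        self.heads = heads
        self.abstractors = abstractors
        self.preds = preds

    def apply_const(self, c):
        """Replace the first bound variable by constant c everywhere
        (single-head case; function placeholders are not modelled)."""
        v, rest = self.abstractors[0], self.abstractors[1:]
        subst = lambda a: c if a == v else a
        preds = [(name, [subst(a) for a in args])
                 for name, args in self.preds]
        heads = [subst(h) for h in self.heads]
        return Expr(heads, rest, preds)

# buy: [e0] λv1.λv0. ev(e0, buy, v0, v1)
buy = Expr(["e0"], ["v1", "v0"], [("ev", ["e0", "buy", "v0", "v1"])])
step = buy.apply_const("n0").apply_const("c1")  # object, then subject
print(step.preds)  # [('ev', ['e0', 'buy', 'c1', 'n0'])]
```

Applying the object identifier n0 and then the subject identifier c1 yields the fully instantiated event predicate, mirroring the derivation of ev(e1, buy, c1, n0) in the running example.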
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lexicon</head><p>In our approach, the lexicon is an algorithm that, given an input word together with the relevant annotations (CCG category, POS tag, lemma, coreference cluster), derives the corresponding λ-ASP* expression. λ-ASP* expressions for content words and some prepositions consist of a single predicate and varying numbers of bound variables and applications. Each word is assigned to one of five sub-lexicons (nominal, event, modifier, preposition, wh-word) based on its POS tag and CCG category, which in turn determines the class of the corresponding predicate. The arity of a predicate, for all cases other than adverbs and some function words, is equal to the number of arguments of the CCG category of the corresponding word.</p><p>Co-indexation is performed for the relevant CCG categories, such as those corresponding to raising and control verbs or relative pronouns. Our approach performs co-indexation for all categories listed in (Hockenmaier and Steedman 2005) and realises it via the application primitive of λ-ASP* expressions. For example, for the control verb wanted with the co-indexed CCG category (S[dcl]\NPi)/(S[to]\NPi), the λ-ASP* expression is: λv1.λv2. ev(e1, want, v2, v1), v1@(v2). Cases where co-indexation has to be performed are identified by pattern matching on CCG categories.</p><p>The application primitive is also used to translate coordinate sentences whose constituents have the same CCG category. For example, in the case of the adjective phrase new and safe, the λ-ASP* expression for the coordinator and is:</p><formula xml:id="formula_9">[v2, v1] λv2.λv1.λv0. v2@(v0), v1@(v0),</formula><p>which is inferred from the CCG categories of the conjuncts, namely (N/N). 
In the case of conjuncts that take more than one argument, such as the transitive verbs buy and sell in the sentence Jack buys and sells oil, the number of arguments of the applications occurring in the λ-ASP* expression is set accordingly:</p><formula xml:id="formula_10">[v3, v2] λv3.λv2.λv1.λv0. v3@(v1, v0), v2@(v1, v0).</formula><p>To ensure the same ordering of verb arguments across different grammatical constructions (for example, passive vs. active voice), the arguments are ordered according to their PropBank semantic role labels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Word λ-ASP* Expression</head><formula xml:id="formula_11">a          [v0] λv0. v0@(n0)
new        [v0] λv0. mod(new, v0), v0@()
and        [v2, v1] λv2.λv1.λv0. v2@(v0), v1@(v0)
safe       [v0] λv0. mod(safe, v0), v0@()
car        [v0] λv0. nom(v0, car), v0@()
that       [v0] λv1.λv0. v1@(v0)
Jack       [c0] nom(c0, jack)
really     [v0] λv0. mod(really, v0), v0@()
wanted     [e0] λv1.λv0. ev(e0, want, v0, v1), v1@(v0)
to         [v0] λv0. v0
buy        [e1] λv1.λv0. ev(e1, buy, v0, v1), v1@(), v0@()
was        [v0] λv0. v0@()
expensive  [expensive] λv0. mod(expensive, v0), v0@()</formula><p>Table <ref type="table">1</ref>: Lexicon entries for the lexical items from the example sentence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Semantic Composition</head><p>The λ-ASP* expression for a given phrase or sentence is constructed bottom-up, following the structure of the CCG parse tree. For each leaf node, a λ-ASP* expression is assigned by the lexicon. For internal nodes, the relevant CCG combinatory rule is identified, based on the categories of the child nodes, and used to derive the λ-ASP* expression compositionally. All combinatory rules are realised using the λ-ASP* expression λv0.λv1. v1@(v0). In the case of type-raising rules, the child node's λ-ASP* expression is assigned to v0. For application and composition rules, the child nodes' λ-ASP* expressions are identified as function and argument based on their CCG categories; the former is assigned to v1 and the latter to v0.</p><p>In our running example, the λ-ASP* expression for the noun phrase new and safe car is derived in three steps. Referring to Table <ref type="table">1</ref>, the semantic representation for the node in line 9 of Listing 1 is derived as follows:</p><formula xml:id="formula_12">{[v2, v1] λv2.λv1.λv0. v2@(v0), v1@(v0)} @ {[v3] λv3. mod(safe, v3), v3@()} ≡ [v0, v1] λv1.λv0. mod(safe, v0), v0@(), v1@(v0)</formula><p>For the node in line 7, we get the following expression:</p><formula xml:id="formula_13">{[v0, v1] λv1.λv0. mod(safe, v0), v0@(), v1@(v0)} @ {[v2] λv2. mod(new, v2), v2@()} ≡ [v0] λv0. mod(safe, v0), mod(new, v0), v0@()</formula><p>Finally, for the node in line 5, we get:</p><formula xml:id="formula_14">{[v0] λv0. mod(safe, v0), mod(new, v0), v0@()} @ {[v1] λv1. nom(v1, car), v1@()} ≡ [v1] λv1. mod(safe, v1), mod(new, v1), nom(v1, car), v1@()</formula></div>
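As an informal analogy rather than the system's actual implementation, the three composition steps above can be mimicked with Python closures that map an entity identifier to the set of atoms a phrase contributes:

```python
# Each lexical meaning maps an entity identifier to the atoms it contributes.
new  = lambda x: {("mod", "new", x)}
safe = lambda x: {("mod", "safe", x)}
car  = lambda x: {("nom", x, "car")}

# Coordinator 'and' for (N/N) conjuncts: apply both modifiers to one entity,
# mirroring v2@(v0), v1@(v0) above.
conj = lambda f, g: (lambda x: f(x) | g(x))

new_and_safe = conj(new, safe)               # step for lines 9 and 7
new_and_safe_car = lambda x: new_and_safe(x) | car(x)  # step for line 5

print(sorted(new_and_safe_car("n0")))
# [('mod', 'new', 'n0'), ('mod', 'safe', 'n0'), ('nom', 'n0', 'car')]
```

Both modifiers and the nominal end up parametrised by the same identifier n0, just as in the final expression for line 5.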
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Translation of Questions</head><p>Our approach for generating ASP representations from text is also applicable to polar questions and questions that require a one-word answer, such as questions starting with the wh-words what, where, who and which.</p><p>For polar questions, the same translation approach described so far is used to derive the corresponding λ-ASP* expressions, with the only differences being that (inflected) auxiliary verbs be, do and have are discarded (translated as the identity λ-ASP* expression [v0] λv0. v0) and that constants in the body of the λ-ASP* expression which do not correspond to definite noun phrases are replaced with variables. The generated predicates of the λ-ASP* expression form the body of an ASP query rule: q ← body. Two additional rules of the form ans(yes) ← q and ans(no) ← not q are included to capture the yes and no answers respectively. If all answer sets of the generated ASP program include the same ground instance of the fact ans, then the corresponding value (yes or no) is returned as the answer. Otherwise the answer UNKNOWN is returned.</p><p>In the case of wh-questions requiring a one-word answer, a query predicate is added whose purpose is to map the identifier to the actual word corresponding to the answer. A rule similar to the one used for polar questions is also included, but with an additional literal: ans(W) ← nom(I, W), body. If the ans(·) predicate is not in any model of the program, the answer UNKNOWN is returned.</p></div>
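The answer-selection step for polar questions can be sketched as follows. The helper is hypothetical: it assumes the answer sets have already been computed and are encoded as sets of (predicate, argument) pairs:

```python
def polar_answer(answer_sets):
    """Return 'yes' or 'no' if every answer set contains the same single
    ground instance of ans/1; otherwise return 'UNKNOWN'."""
    if not answer_sets:
        return "UNKNOWN"
    # ground instances of ans/1 shared by every answer set
    common = set.intersection(*[{v for (p, v) in m if p == "ans"}
                                for m in answer_sets])
    if len(common) == 1:
        return common.pop()
    return "UNKNOWN"

print(polar_answer([{("ans", "yes")}, {("ans", "yes")}]))  # yes
print(polar_answer([{("ans", "yes")}, {("ans", "no")}]))   # UNKNOWN
```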
<div xmlns="http://www.tei-c.org/ns/1.0"><head>λ-ASP* to ASP Conversion</head><p>A λ-ASP* representation generated from a given text may or may not contain bound variables. Hence, the conversion of a λ-ASP* representation into an ASP representation has to consider two cases.</p><p>In the first case, no bound variables are present; the resultant λ-ASP* expression consists exclusively of instantiated predicates, i.e. ones for which all predicate arguments are constants. Therefore, each predicate is converted directly to an ASP fact. The λ-ASP* expression derived for our running example falls under this category, hence the following ASP program is generated: {nom(c1, jack). nom(n0, car). mod(expensive, n0). mod(safe, n0). mod(new, n0). mod(really, e0). ev(e0, want, c1, e1). ev(e1, buy, c1, n0).}</p><p>In the second case, the list of bound variables in the resultant λ-ASP* expression is non-empty and the expression is converted to a set of normal rules and a set of facts, which correspond to the universal and existential quantification of the given sentence. For each rule, the head is the predicate whose identifier is equal to one of the heads of the λ-ASP* expression. The body of each rule is composed incrementally by including every predicate from the λ-ASP* expression (other than the head predicate) which has a variable among its arguments that is also an argument of either the head of the given rule or of some other predicate already included in the body. A set of facts, corresponding to the existential quantification, is also derived by applying Skolem constants to the λ-ASP* expression. As an example, consider the sentence Jack and Jim enjoy opera, with the λ-ASP* expression given by: [e0] λv1. nom(c1, jack), nom(c2, jim), nom(v1, opera), ev(e0, enjoy, c1, v1), ev(e0, enjoy, c2, v1). Using the previously described procedure, the λ-ASP* expression is converted to the following ASP program:</p><p>{ev(e0, enjoy, c1, X) ← nom(X, opera). 
ev(e0, enjoy, c2, X) ← nom(X, opera). nom(c1, jack). nom(c2, jim). nom(f1, opera). ev(e0, enjoy, c1, f1). ev(e0, enjoy, c2, f1).} The first two rules of the above program can be intuitively read as: Jack (respectively Jim) enjoys everything that is an opera. The last two facts state that there exists an opera that Jack and Jim enjoy, and they allow us to answer questions such as: What does Jack (Jim) enjoy?</p><p>The truth value of the predicate fluent(T, L, X, Y) persists across timepoints and can change from one timepoint to the next. It is parametrised by the timepoint T, the lemma L of the corresponding word, which is the name of the fluent, and the arguments of the word. For example, fluent(2, be, c1, n2), where c1 and n2 are identifiers of Jim and kitchen, means that Jim is in the kitchen at time 2 and will continue to be there until termination. Words that are treated as fluents are specified by providing initiation and termination rules in the background knowledge.</p></div>
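Continuing the Jack-and-Jim example, the effect of the query rule ans(W) ← nom(I, W), body can be reproduced over the generated facts with a small illustrative matcher; the helper name and the tuple encoding of facts are ours, not the system's:

```python
# Ground facts generated for "Jack and Jim enjoy opera",
# encoded as tuples: ("nom", id, word) and ("ev", id, lemma, agent, theme).
facts = {
    ("nom", "c1", "jack"), ("nom", "c2", "jim"), ("nom", "f1", "opera"),
    ("ev", "e0", "enjoy", "c1", "f1"),
    ("ev", "e0", "enjoy", "c2", "f1"),
}

def wh_answer(facts, agent_word, verb):
    """ans(W) :- nom(A, agent_word), ev(E, verb, A, I), nom(I, W)."""
    agents = {f[1] for f in facts if f[0] == "nom" and f[2] == agent_word}
    themes = {f[4] for f in facts
              if f[0] == "ev" and f[2] == verb and f[3] in agents}
    answers = {f[2] for f in facts if f[0] == "nom" and f[1] in themes}
    return sorted(answers) or ["UNKNOWN"]

print(wh_answer(facts, "jack", "enjoy"))  # ['opera']
```

The Skolem fact nom(f1, opera) is what makes the one-word answer opera retrievable for both Jack and Jim.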
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Discussion</head><p>The dataset used for evaluating our approach consists of 22 sentences and short stories, 10 of which were hand-crafted, 5 taken from <ref type="bibr" target="#b0">(Baral and Dzifcak 2012)</ref>, 5 from online news articles and 2 from the STEP 2008 shared task dataset <ref type="bibr" target="#b4">(Bos 2008a)</ref>. For the hand-crafted examples, correct representations were generated for 9 out of 10. The remaining 12 examples consist of 18 sentences; a correct predicate structure was generated for 14 of them, and for the other 4 the main source of errors was the CCG parser. Among the 14 sentences, the representations were fully correct for 9; for the other 5, some predicates were incorrectly parametrised due to errors in coreference resolution, the interpretation of noun phrases (distributive vs. collective), the semantics of prepositions, and the lack of support for expletive sentences and different types of elliptical constructions. The results of the evaluation confirm that our approach can correctly represent sentences with coordination and non-local dependencies, use coreference information to resolve personal and possessive pronouns, and capture the semantics of different parts of speech.</p><p>The question-answering functionality of the system was evaluated on the 20 QA bAbI tasks <ref type="bibr" target="#b13">(Weston et al. 2015</ref>), which we also used for learning common sense knowledge via Inductive Logic Programming, a topic outside the scope of this paper. Our system was evaluated on 10 tasks <ref type="bibr">(tasks 1, 2, 5, 6, 8, 9, 12, 15, 16 and 18)</ref> which, among others, required correct representation of conjunction and generic sentences, as well as temporal and default reasoning, to be answered correctly. Using general (non-task-specific) background knowledge, our system achieved 100.0% accuracy on 9 tasks and 93.6% on the remaining task (task 16); the lower accuracy was due to that task's requirement for task-specific background knowledge.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>The two approaches most related to our work are the Boxer system <ref type="bibr" target="#b3">(Bos et al. 2004</ref>) and the λ-ASP calculus described in <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref>. The Boxer system relies on the C&amp;C syntactic parser, which uses CCG, together with Discourse Representation Theory (DRT) to generate first-order logic representations from texts written in English. The system achieves more than 95% coverage of English newspaper texts <ref type="bibr" target="#b5">(Bos 2008b</ref>) and the generated logical representations are used as input to a theorem prover for recognising textual entailment <ref type="bibr" target="#b2">(Bos and Markert 2005)</ref>. The Boxer system uses the λ-DRS (Discourse Representation Structure) formalism to derive semantic representations compositionally. The first step of the translation algorithm is the assignment of λ-DRS expressions to the leaf nodes of the parse tree generated for the sentence by the C&amp;C parser. The lexicon is derived by manually specifying λ-DRS expressions for the majority of the 409 CCG categories used by the C&amp;C parser. Then the combinatory rules, expressed in terms of λ-DRS expressions, are used to derive the semantics of the internal nodes in a bottom-up manner, as dictated by the structure of the parse tree. The method for constructing semantic representations used by the system is a general-purpose one and can be applied to formalisms other than DRT. Finally, the λ-DRS is translated into first-order logic. Our approach instead uses a different formalism, namely ASP rather than first-order logic, which makes it arguably easier to perform the inferences necessary in the task of question answering. 
With regard to the translation algorithm, both our approach and the Boxer system generate the lexicon based on a set of hand-crafted rules that rely on the CCG category, part-of-speech tag, and lemma of the given word.</p><p>The λ-ASP calculus described in <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008</ref>) is used to translate English sentences with normatives and exceptions into ASP. λ-ASP expressions, similarly to the λ-DRS expressions used by Boxer, extend the underlying formalism with application and abstraction as known from λ-calculus. λ-ASP expressions are then used to build semantic representations of phrases and sentences compositionally. The λ-ASP calculus, as presented in <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref>, is constrained to a small subset of English consisting of nouns, verbs and adjectives, and does not support, among others, adverbs, prepositions and conjunctions. In subsequent work <ref type="bibr" target="#b0">(Baral et al. 2011)</ref>, a method was proposed for automatically generating lexicon entries from training data specified as pairs (S_i, L_i), where S_i is a sentence and L_i its corresponding logical representation. Although the approach achieves favorable results on the Geoquery database querying dataset and on a dataset of logical puzzles <ref type="bibr" target="#b0">(Baral and Dzifcak 2012)</ref>, the generated logical representations are still domain-specific and the proposed translation algorithm does not handle binding or linguistic phenomena such as coreference and non-local dependencies. In comparison, our approach offers a systematic treatment of adverbs, coordination and non-local dependencies, which, to the best of our knowledge, the calculus of <ref type="bibr" target="#b1">(Baral, Dzifcak, and Son 2008)</ref> is not capable of. Differently from λ-ASP, our approach does not use a domain-specific logical representation, but rather a wide-coverage semantic parser, similarly to the Boxer system.</p></div>
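The compositional, bottom-up construction that both Boxer's λ-DRS and the λ-ASP calculus rely on, where lexical entries are λ-expressions combined as the CCG derivation dictates, can be sketched in miniature. The following Python toy is illustrative only: the lexical entries, predicate names and event identifiers are our own inventions, not the actual λ-DRS or λ-ASP lexicon, and Python lambdas stand in for λ-terms.

```python
# Toy illustration of CCG-style compositional semantics via lambda application.
# Lexical entries map words to semantic functions; combinatory application
# builds the sentence meaning bottom-up, mirroring the parse tree.

def make_lexicon():
    # "John"    : NP            -> a constant
    # "sleeps"  : S\NP          -> a predicate awaiting its subject
    # "soundly" : (S\NP)\(S\NP) -> an adverb modifying a verb phrase
    return {
        "John": "john",
        "sleeps": lambda subj: [("sleep", "e1", subj)],
        "soundly": lambda vp: (lambda subj: vp(subj) + [("manner", "e1", "soundly")]),
    }

def backward_apply(arg, fn):
    # CCG backward application (<): a preceding Y combines with X\Y on its right
    return fn(arg)

lex = make_lexicon()
# Derivation for "John sleeps soundly":
vp = backward_apply(lex["sleeps"], lex["soundly"])   # S\NP  <  (S\NP)\(S\NP)
sentence = backward_apply(lex["John"], vp)           # NP    <  S\NP

print(sentence)  # [('sleep', 'e1', 'john'), ('manner', 'e1', 'soundly')]
```

The essential point mirrored here is that the adverb consumes the verb phrase and returns a new function of the same type, so the same application rules drive the whole derivation.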
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>In summary, we have presented a novel approach for automatically generating ASP representations from text and from questions with single-word answers, which makes use of CCG's transparent interface between the syntax and semantics of natural language. To support wide-coverage semantic parsing of text, we have generalised the existing λ-ASP calculus into a general-purpose λ-ASP* calculus capable of handling additional complex linguistic constructs. We have also shown how additional common-sense background knowledge about persistence and exceptions can be represented and related to the generated ASP representation of the text in order to support answers to questions that are not directly expressed in the text. We are currently exploring the integration into our approach of current state-of-the-art systems for learning ASP programs <ref type="bibr" target="#b10">(Law, Russo, and Broda 2016)</ref> in order to automatically learn, instead of hand-crafting, relevant common-sense knowledge from examples of question-answer pairs. Our preliminary results along this line of work are very promising.</p></div>		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Exceptions and Default Persistence</head><p>Natural language expressions are often associated with implicit information that does not follow directly from their literal meaning and requires background knowledge to be interpreted correctly. To associate words in the text with their interpretations, a semantic predicate is introduced for each predicate in the chosen signature. For example, for the binary event predicate, the semantic predicate is defined as: sem_ev(E, L, X, Y) ← ev(E, L, X, Y), not ab_ev(E, L, X, Y). Definitions of semantic predicates specify an exception structure. Consider, for instance, the implicit negation introduced by the verb forget, as in the sentence: Jim forgot to take an umbrella. This implicit negation can be captured by the following rule: ab_ev(E0, L, X, Y) ← sem_ev(E1, forget, X, E0), ev(E0, L, X, Y). When applied to the above example sentence, the rule has the following grounding: ab_ev(e0, take, c1, n1) ← sem_ev(e1, forget, c1, e0), ev(e0, take, c1, n1), where c1 and n1 correspond to Jim and umbrella respectively.</p><p>Understanding and keeping track of the persistence of fluents over time plays an important role in text comprehension. In our approach we assume a linear time structure in the narratives. Verbs occurring in the text are associated with implicit time points, which are represented in the generated ASP program by ground instances of the predicate time(T, E). The argument T is a positive number equal to the position of the verb, identified by E, in the text relative to the other verbs. Persistence is captured using a set of rules motivated by the Event Calculus <ref type="bibr" target="#b9">(Kowalski and Sergot 1986)</ref>.</p></div>			</div>
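The inertia behaviour described above, where a fluent initiated by an event continues to hold until some later event terminates it, can be sketched procedurally. The following Python fragment is an illustrative model only: the function and predicate names (holds_at, initiated, terminated) and the example narrative are our own, not the paper's actual ASP rules.

```python
# Procedural sketch of default persistence (inertia), in the spirit of the
# Event Calculus. A fluent holds at time t if it was initiated at some
# time point t0 <= t and not terminated at any time point in (t0, t].

def holds_at(fluent, t, initiated, terminated):
    # All initiation points of this fluent up to and including t.
    starts = [t0 for t0 in initiated.get(fluent, []) if t0 <= t]
    if not starts:
        return False
    last_start = max(starts)
    # The fluent persists unless a terminating event clips it after last_start.
    return not any(last_start < t1 <= t for t1 in terminated.get(fluent, []))

# Narrative: "Jim took the umbrella (t=1). Jim dropped the umbrella (t=3)."
initiated = {("has", "jim", "umbrella"): [1]}
terminated = {("has", "jim", "umbrella"): [3]}

print(holds_at(("has", "jim", "umbrella"), 2, initiated, terminated))  # True: persists
print(holds_at(("has", "jim", "umbrella"), 4, initiated, terminated))  # False: clipped
```

In the ASP setting the same effect is obtained declaratively, with negation as failure playing the role of the "no terminating event in between" check.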
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Solving puzzles described in English by automated translation to answer set programming and learning how to do that translation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dzifcak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning</title>
				<meeting>the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2011">2012. 2011</date>
			<biblScope unit="page" from="35" to="44" />
		</imprint>
	</monogr>
	<note>Proceedings of the Ninth International Conference on Computational Semantics, IWCS &apos;11</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Using Answer Set Programming and lambda calculus to characterize natural language sentences with normatives and exceptions</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dzifcak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Son</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd National Conference on Artificial Intelligence</title>
				<meeting>the 23rd National Conference on Artificial Intelligence</meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="818" to="823" />
		</imprint>
	</monogr>
	<note>AAAI &apos;08</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Recognising textual entailment with logical inference</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Markert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT &apos;05</title>
				<meeting>the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT &apos;05<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="628" to="635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Wide-coverage semantic representations from a CCG parser</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Steedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Curran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hockenmaier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th International Conference on Computational Linguistics, COLING &apos;04</title>
				<meeting>the 20th International Conference on Computational Linguistics, COLING &apos;04<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Introduction to the shared task on comparing semantic representations</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 Conference on Semantics in Text Processing, STEP &apos;08</title>
				<meeting>the 2008 Conference on Semantics in Text Processing, STEP &apos;08<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2008">2008a</date>
			<biblScope unit="page" from="257" to="261" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Wide-coverage semantic analysis with Boxer</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 Conference on Semantics in Text Processing, STEP &apos;08</title>
				<meeting>the 2008 Conference on Semantics in Text Processing, STEP &apos;08<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2008">2008b</date>
			<biblScope unit="page" from="277" to="286" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The stable model semantics for logic programming</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gelfond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lifschitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logic Programming: Proceedings of the 5th International Conference</title>
		<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="1070" to="1080" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hockenmaier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Steedman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Comput. Linguist</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="355" to="396" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">From TreeBank to PropBank</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kingsbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Palmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Third International Conference on Language Resources and Evaluation</title>
				<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
	<note>LREC-02</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A logic-based calculus of events</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kowalski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sergot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">New Generation Computing</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="67" to="95" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Law</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Broda</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1608.01946</idno>
		<title level="m">Iterative learning of answer set programs from context dependent examples</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The Stanford CoreNLP natural language processing toolkit</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Surdeanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Finkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Bethard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mcclosky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Association for Computational Linguistics (ACL) System Demonstrations</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="55" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">The Syntactic Process</title>
		<author>
			<persName><forename type="first">M</forename><surname>Steedman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, MA, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Weston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Rush</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Merriënboer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1502.05698</idno>
		<title level="m">Towards AI-complete question answering: A set of prerequisite toy tasks</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
