<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Understanding Stories with Large-Scale Common Sense</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Bryan</forename><surname>Williams</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Computer Science and Artificial Intelligence Laboratory</orgName>
								<orgName type="institution">Massachusetts Institute of Technology</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Henry</forename><surname>Lieberman</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Computer Science and Artificial Intelligence Laboratory</orgName>
								<orgName type="institution">Massachusetts Institute of Technology</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Winston</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Computer Science and Artificial Intelligence Laboratory</orgName>
								<orgName type="institution">Massachusetts Institute of Technology</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Understanding Stories with Large-Scale Common Sense</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">781510925D920F99801D900529274504</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Story understanding systems need to be able to perform commonsense reasoning, specifically regarding characters' goals and their associated actions. Some efforts have been made to form large-scale commonsense knowledge bases, but integrating that knowledge into story understanding systems remains a challenge. We have implemented the Aspire system, an application of large-scale commonsense knowledge to story understanding. Aspire extends Genesis, a rule-based story understanding system, with tens of thousands of goal-related assertions from the commonsense semantic network ConceptNet. Aspire uses ConceptNet's knowledge to infer plausible implicit character goals and story causal connections at a scale unprecedented in the space of story understanding. Genesis's rule-based inference enables precise story analysis, while ConceptNet's relatively inexact but widely applicable knowledge provides a significant breadth of coverage difficult to achieve solely using rules. Genesis uses Aspire's inferences to answer questions about stories, and these answers were found to be plausible in a small study. Though we focus on Genesis and ConceptNet, demonstrating the value of supplementing precise reasoning systems with large-scale, scruffy commonsense knowledge is our primary contribution.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>Because story understanding is essential to human intelligence, modeling human story understanding is a longstanding goal of artificial intelligence. Here, we use the term "story" to refer to any related sequence of events; a traditional narrative structure is not necessary. Many story-understanding systems rely on rules to express and manipulate common sense, such as the rule "If person X harms person Y, person Y may become angry." However, the amount of commonsense knowledge humans possess is vast, and manually expressing significant amounts of common sense using rules is tedious.</p><p>Instead, story-understanding systems should be able to make the commonsense connections and inferences on their own, and rule authors should focus on the specifics of a particular domain or story. Commonsense knowledge bases can help achieve this vision. They express domain-independent and story-independent knowledge, and this knowledge can complement explicit rules by "filling in the gaps." Our work explores the issues of how and when to use each kind of representation and reasoning.</p><p>Genesis is a story-understanding system which models aspects of human story understanding by reading and analyzing stories written in simple English. Genesis can demonstrate its understanding in numerous ways, including question answering, story summarization, and hypothetical reasoning. Prior to this work, Genesis required rule authors to explicitly construct rules that codify all the knowledge necessary for identifying causal connections between story events. For broad-coverage story understanding, explicit construction of all necessary rules is not feasible. ConceptNet <ref type="bibr" target="#b3">(Havasi et al. 2009)</ref>, a large knowledge base of common sense, helps lessen the burden placed on rule authors. Much of ConceptNet's knowledge has been crowdsourced. 
ConceptNet includes more than a million assertions, but in this work we draw from 20,000 concepts connected in 225,000 assertions. We have implemented the Aspire system, a new Genesis module which uses ConceptNet's goal-related knowledge to infer implicit explanations for story events, improving its understanding and analysis.</p><p>Inferring nontrivial implicit events at a large scale is a new capability within the field of story understanding. Genesis provides a human-readable justification for every ConceptNet-assisted inference it makes, showing the user exactly what pieces of commonsense knowledge it believes are relevant to the story situation. Currently, Genesis incorporates only ConceptNet's goal-related knowledge, approximately 12,000 assertions in total, into its story processing to demonstrate the viability of this approach. However, we've established a general connection between the two systems so future work can incorporate additional kinds of ConceptNet knowledge. Despite ConceptNet's admitted imprecision and spotty coverage, we've found it simplifies the process of identifying and applying relevant knowledge at a large scale. However, without judicious use, this imprecision can result in faulty story analyses. Therefore, we developed guiding heuristics which enable Genesis to take advantage of ConceptNet's loosely structured commonsense knowledge in a careful manner. We focus on Genesis and ConceptNet, but we believe the significance of this large-scale application of commonsense knowledge to story understanding extends beyond these two systems. ConceptNet is a large semantic network of common sense developed by the Open Mind Common Sense (OMCS) project at MIT <ref type="bibr" target="#b3">(Havasi et al. 2009)</ref>. There are several versions of ConceptNet, but we used ConceptNet 4 in our work. The latest version of ConceptNet as of this writing is ConceptNet 5, publicly available at http://conceptnet.io/. 
ConceptNet 5 is a larger collection that differs from version 4 primarily by containing "factual" knowledge from WikiDB and other sources in addition to the "pure commonsense" statements, like "water is wet," expressed in previous versions. In this paper, we use "ConceptNet 4" when discussing attributes of the system specific to that version, and "ConceptNet" at all other times.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Background ConceptNet</head><p>ConceptNet represents its knowledge using concepts and relations between them. Concepts can be noun or verb phrases such as "computer", "breathe fresh air" or "beanbag chair." There are 27 relations in ConceptNet 4, including "Is A," "Used For," "Causes," "Desires," "At Location," and a catchall relation "Has Property." ConceptNet expresses its knowledge using assertions, each of which consists of a left concept, a relation, and a right concept. For instance, the ConceptNet assertion "computer At Location office" represents the fact that computers are commonly found in offices. Every assertion is associated with a score which represents the confidence in the truth of that assertion. Knowledge reinforced by more sources is assigned a higher confidence score. The knowledge in ConceptNet 4 was mostly crowdsourced, collected from natural language statements and online games-with-a-purpose designed to improve the knowledge base.</p><p>ConceptNet assertions are not designed to be interpreted as first-order logic expressions. For instance, ConceptNet contains the assertion "sport At Location field." It's not always true that sports are found on a field, as plenty of sports are played indoors. However, this assertion is still a useful bit of common sense. ConceptNet also does not use strict logical inference. Logical reasoning would combine the reasonable assertions "volleyball Is A sport" and "sport At Location field" to form the conclusion "volleyball At Location field," which is rarely true because volleyball is usually played indoors or on a beach.</p><p>ConceptNet's knowledge is also not purely statistical. The score for "sport At Location field" is not determined by dividing the number of sports played on a field by the number of sports, as such an approach clumsily ignores context dependency. 
ConceptNet does apply some statistical machinery to form new conclusions from large numbers of assertions, but it uses human-supplied generalizations as input rather than pointwise data like occurrences of words in documents.</p><p>ConceptNet is intentionally "scruffy" in nature (Minsky 1991), embracing the chaotic nature of common sense. Assertions can be ambiguous, contradictory, or context-dependent, but the same is true of the commonsense knowledge humans possess, albeit to a different degree. ConceptNet's lack of precision does mean applications that use it must be careful in exactly how they apply its knowledge. We discuss our heuristic approach in "The Aspire System" section of this paper.</p></div>
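The assertion structure and direct, non-chaining lookup described above can be sketched in a few lines of Python. This is a hypothetical illustration: the knowledge base, scores, and function names are invented, and ConceptNet's real API differs.

```python
from dataclasses import dataclass

# One ConceptNet-4-style assertion: a left concept, a relation, a right
# concept, and a confidence score reinforced by multiple sources.
@dataclass(frozen=True)
class Assertion:
    left: str
    relation: str
    right: str
    score: float

# A toy knowledge base (scores are invented for illustration).
KB = {
    Assertion("volleyball", "IsA", "sport", 2.5),
    Assertion("sport", "AtLocation", "field", 1.5),
    Assertion("computer", "AtLocation", "office", 3.0),
}

def holds(left, relation, right):
    """Direct lookup only: consumers check for an explicit assertion
    rather than chaining assertions with strict logical inference."""
    return any(a.left == left and a.relation == relation and a.right == right
               for a in KB)

print(holds("sport", "AtLocation", "field"))       # True: asserted directly
print(holds("volleyball", "AtLocation", "field"))  # False: no chaining through "IsA"
```

The second lookup failing is the point made in the text: "volleyball IsA sport" plus "sport AtLocation field" does not license "volleyball AtLocation field."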
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Genesis</head><p>Genesis is a story-understanding system started by Patrick Winston in 2008 and implemented in Java. Stories are input to Genesis via simplified natural language sentences and parsed into story events by <ref type="bibr">Katz's START parser (1997)</ref>. We use the term story event to refer to story sentences, phrases, or inferences that Genesis extracts as a unit of computation.</p><p>Genesis can summarize stories, answer questions about them, perform hypothetical reasoning, align stories to identify analogous events, and analyze a story from the perspective of one of the characters, among many other capabilities <ref type="bibr" target="#b21">(Winston 2014;</ref><ref type="bibr" target="#b7">Holmes and Winston 2016</ref>; Noss 2016). To perform intelligent analyses of a story, Genesis relies heavily on making inferences and forming causal connections between the story's events. If these connections or inferences are of poor quality, the analyses suffer. Prior to this work, commonsense knowledge was obtained exclusively through rules, putting a large burden on the rule author. The same ruleset is likely useful for many stories, especially within the same genre, so the author may be able to reuse or modify previously constructed rulesets. Still, separating general, often banal commonsense knowledge from Genesis rules frees the rule author to concentrate on domain specifics and high-level story patterns. ConceptNet is considerably larger than any previously constructed Genesis rule set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The Aspire System</head><p>Goals are essential to human nature. Humans are constantly forming goals and taking actions towards them, and Schank long ago recognized the importance of identifying character goals when processing natural language <ref type="bibr">(1977)</ref>. Low-level goals can change the way the brain processes information, altering what people pay attention to and what people remember <ref type="bibr" target="#b9">(Kenrick et al. 2010)</ref>. At a more conscious level, goals are an effective motivational tool <ref type="bibr" target="#b12">(Locke and Latham 2006)</ref>. Because goals drive humans at both a subconscious and conscious level, goals are prevalent in the stories humans tell as well. Enabling Genesis to better understand goals, their causes, and actions taken towards completing them improves its comprehension of goal-oriented stories.</p><p>Figure <ref type="figure">2</ref>: The inference and equivalent rule formulation Aspire makes from the example explicit story events "Sean is gaining weight" and "Sean rides his bike to work." The inferred character goal "Sean wants to exercise" connects these two story events.</p><p>ConceptNet is a great resource for analyzing goals because it contains a multitude of commonsense knowledge about both what causes goals and what fulfills them. The relations "Causes Desire," "Motivated By Goal," "Used For," and "Has Subevent" are all particularly relevant to the goal domain. We have given Genesis the means to leverage this knowledge while processing a story to perform goal-related inference.</p><p>We introduce the Aspire system, a Genesis module which analyzes characters' goals, their causes, and the actions taken to complete them. Aspire uses approximately 12,000 goal-related assertions from ConceptNet, but can operate using explicitly authored rules as well. 
With Aspire and ConceptNet, Genesis can arrive at a more complete understanding of a story with less effort from the rule author.</p></div>
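The division of labor among these goal-related relations can be sketched in Python: one relation proposes candidate goals, while the other three confirm them. This is a hypothetical illustration; the two-assertion knowledge base and all function names are invented and are not Genesis's actual API.

```python
# Toy knowledge base of (left concept, relation, right concept) triples,
# mirroring the goal-related ConceptNet relations Aspire consumes.
KB = {
    ("gain weight", "CausesDesire", "exercise"),
    ("ride bike", "MotivatedByGoal", "exercise"),
}

def candidate_goals(event_concept):
    """Goal causation: concepts the event may make a character desire."""
    return {right for (left, rel, right) in KB
            if left == event_concept and rel == "CausesDesire"}

def contributes(action_concept, goal_concept):
    """Goal contribution: does the action plausibly serve the goal?
    Any of the three confirming relations suffices."""
    return ((action_concept, "MotivatedByGoal", goal_concept) in KB
            or (action_concept, "UsedFor", goal_concept) in KB
            or (goal_concept, "HasSubevent", action_concept) in KB)

print("exercise" in candidate_goals("gain weight"))  # True: candidate proposed
print(contributes("ride bike", "exercise"))          # True: candidate confirmed
```

The two-phase structure matters: a candidate goal proposed by "Causes Desire" has no effect on the story until some later action confirms it through one of the contribution relations.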
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Implementation</head><p>Aspire works by maintaining a set of candidate character goals as Genesis sequentially reads the story input, adding to the set when it detects that a story event might cause a character in the story to have a goal. Aspire analyzes each story event to see if it causes a candidate character goal. It also checks if each story event is a goal contribution, an action taken by a character to accomplish a previously identified candidate character goal. Candidate character goals are kept merely as candidates, and do not affect other Genesis modules, until Aspire sees a goal contribution for one of them, at which point the goal is "confirmed" and the inference is inserted into the story. Aspire also analyzes its own inferences for goal causation and contribution, allowing its inferences to build off one another. Note that we use the terms "candidate character goal" and "candidate goal" interchangeably.</p><p>As an example, consider a story about a man named Sean that contains the event "Sean is gaining weight." In this scenario, Aspire checks if this event causes a goal by examining the rules Genesis was given and by consulting ConceptNet. ConceptNet knows that gaining weight causes a desire to exercise, so Aspire receives the assertion "gain weight Causes Desire exercise" from ConceptNet, among other relevant assertions. Aspire adds the corresponding candidate character goals to its set, including "Sean wants to exercise," and continues reading the story.</p><p>In addition to goal causation analysis, Aspire also performs goal contribution analysis on every received story event. During goal contribution analysis, Aspire checks if the current story event contributes to any of the candidate goals in its set. To detect a contribution between an event and candidate goal, Aspire first tries to match the event to the goal using traditional Genesis matching, which does not involve ConceptNet. 
If this match succeeds, Aspire concludes that the event contributes to the candidate character goal. If matching fails, ConceptNet is consulted.</p><p>Suppose the story contains some subsequent event "Sean rides his bike to work." During goal contribution analysis of this event, Genesis matching between bike riding and Sean's candidate goal of exercising fails. Therefore, Aspire extracts the concept "ride bike" from the event and "exercise" from the candidate character goal. Aspire then queries ConceptNet to see if any of "ride bike Motivated By Goal exercise," "ride bike Used For exercise," or "exercise Has Subevent ride bike" are true. If ConceptNet confirms any of these assertions, Aspire concludes the event contributes to the candidate character goal; otherwise, with both Genesis matching and ConceptNet failing to form a connection between the event and the goal, Aspire concludes the event does not contribute to the goal. In this goal contribution analysis, ConceptNet confirms "ride bike Motivated By Goal exercise," and Aspire links Sean's bike riding to his goal of exercising. While there is no guarantee that these two events are causally related, storytelling convention and commonsense reasoning indicate it's likely they are. Aspire is designed to generate plausible inferences, not indisputable ones.</p><p>When Aspire concludes that a story event contributes to a candidate character goal, Genesis forms the associated causal connections and adds new inferred events to the story. Consider once again the Sean example. Having analyzed "Sean is gaining weight" and "Sean rides his bike to work," Genesis adds "Sean wants to exercise" to the story. Genesis also forms causal connections between the inferred goal's cause, the inferred goal, and the goal contribution action. The completion of the goal ("Sean exercises") is instantiated and added to the story as well. 
Genesis explicitly adds the inferred goal and inferred goal completion to the story so that all modules, including Aspire, can process these inferences and draw additional conclusions. For example, Sean exercising could contribute to a candidate character goal "Sean wants to lose weight" that was formed earlier in the story. In this way, Aspire's inferences recursively enable additional Aspire inferences.</p><p>Importantly, the character goal and character goal completion are only inserted into the story once Aspire sees an action taken that contributes to the character goal. Candidate character goals on their own do not affect the story in any way; a character action must first prove a candidate goal credible before Genesis acts on Aspire's analysis. If Aspire never sees such an action, the candidate goal remains a potential interpretation of the story's events that lacks sufficient evidence. Aspire operates in this way because events that can cause a goal do not always cause that particular goal. In the Sean example, weight gain does not necessitate a desire to exercise. It's possible that Sean's weight gain caused him to diet instead of exercise, or that he did not respond at all. Aspire's defensive manner also helps prevent misapplication of ConceptNet's knowledge. ConceptNet's inexact nature allows its knowledge to be widely applicable, but its knowledge can also be ambiguous, context-dependent, or otherwise imprecise. Therefore, Aspire requires seeing both a goal's cause and a goal's effect before inferring the goal to help ensure it has formed a correct inference.</p><p>Extracting relevant ConceptNet concepts from a story event is not always straightforward. For example, in the sentence "Matt buys a book," it would be appropriate for Aspire to extract both "buy book" and "book" as concepts to see what goals these may cause or complete. 
However, if the event were instead "Matt loses his book," while it would still be appropriate to extract "lose book" as a concept, "book" by itself would not be appropriate because ConceptNet provides knowledge about possessing a book, not losing it.</p><p>As this example shows, verbs and verb phrases are simpler to extract than nouns and noun phrases because their meaning is less dependent on the rest of the sentence. Therefore, nearly all of the concepts Aspire extracts from story events are verbs or verb phrases rather than nouns or noun phrases. We developed heuristics which specify the behavior for concept extraction and matching based on the part of speech of the concept, the transitivity of the verb in a verb phrase concept, and whether a noun concept is a proper noun. These heuristics were devised by examining patterns in ConceptNet data. More specifically:</p><p>
• Nouns or noun phrases are only extracted if the verb is a form of "be," "have," or "feel" (e.g. "compassion" is extracted from "Matt has compassion").
• Proper nouns never appear in extracted verb phrase concepts, as ConceptNet generally does not contain knowledge about specific people, places, or things.
• When a verb takes an object that isn't a proper noun, verb phrases are extracted rather than sole verbs because the knowledge about the sole verb may not be relevant when the object is taken into account (e.g. "fight" vs. "fight inflation").
• Modifying adverbs can be included in extracted verb phrase concepts, allowing Aspire to query ConceptNet for "work hard" in addition to "work" upon reading "Matt works hard."
• The transitivity of the verbs in the goal causation and goal contribution need not match. Aspire can detect that "Matt sings" contributes to "Matt wants to make music."
• Aspire assumes that, in an assertion, the subject and optional object the left concept takes are consistent with the subject and optional object the right concept takes. For example, if Aspire reads that "Matt loves Helen" and learns from ConceptNet that "love Causes Desire kiss," Aspire adds "Matt wants to kiss Helen" as a candidate goal. It does not hypothesize that Helen wants to kiss Matt, that Matt wants to kiss someone else, or that someone else wants to kiss.</p><p>While these heuristics are not infallible, they help ensure Aspire is using data appropriate to the story in the correct manner. We describe the extraction and matching algorithms in more detail in our earlier work (Williams 2017).</p></div>
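The first few extraction heuristics above can be sketched as a small Python function. This is a rough simplification: the actual implementation operates over START parses, and the function name, arguments, and copular-verb set used here are assumptions for illustration only.

```python
# Verbs after which a bare noun concept may also be extracted
# ("be", "have", "feel" per the heuristics above).
NOUN_EXTRACTING_VERBS = {"be", "have", "feel"}

def extract_concepts(verb, obj=None, obj_is_proper=False, adverb=None):
    """Return the ConceptNet concepts to query for a simple
    (verb, optional object) story event."""
    concepts = []
    if obj is None:
        concepts.append(verb)                # "Matt sings" -> "sing"
    elif obj_is_proper:
        concepts.append(verb)                # proper nouns never enter concepts
    else:
        concepts.append(f"{verb} {obj}")     # "fight inflation", not just "fight"
        if verb in NOUN_EXTRACTING_VERBS:
            concepts.append(obj)             # "Matt has compassion" -> "compassion"
    if adverb is not None:
        concepts.append(f"{verb} {adverb}")  # "work hard" in addition to "work"
    return concepts

print(extract_concepts("have", "compassion"))               # ['have compassion', 'compassion']
print(extract_concepts("love", "Helen", obj_is_proper=True))  # ['love']
print(extract_concepts("work", adverb="hard"))              # ['work', 'work hard']
```

The proper-noun case reflects the "Matt loves Helen" example: only the verb "love" is queried, and the story's subject and object are re-attached to whatever concept ConceptNet returns.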
<div xmlns="http://www.tei-c.org/ns/1.0"><head>An Example Analysis</head><p>In this section we describe the performance of the Aspire system on an example story. Genesis justifies its conclusions with human-readable text that describes the relevant knowledge obtained from ConceptNet. We also give an example of how Aspire's capabilities allow other Genesis features to perform better.</p><p>The following is a simple retelling of the Mexican-American war, a conflict which occurred in the 1840s. Because of the limitations of the START parser, stories are input to Genesis using simple English. This summary of the war is plain, brief, and high-level, resembling an elementary school textbook chapter summary. This example focuses on ConceptNet, exploring how Genesis analyzes this story using Aspire when it is not given any rules from an author. There is one goal explicitly stated in the story, "The United States wants to gain land," but Genesis is not told how this relates to any of the other story events.</p><p>A subset of the elaboration graph depicting Genesis's analysis of the Mexican-American war story is shown in Figure <ref type="figure" target="#fig_1">3</ref>. The elaboration graph is a central Genesis display which shows the inferences and causal connections Genesis has made. Each story event is shown in a box, and arrows between boxes point from cause to effect. Events and connections inferred using ConceptNet knowledge are shown using dotted lines.</p><p>Aspire allows Genesis to connect the United States' ambition and greed to its actions taken against Mexico. The inferences Genesis makes are just one possible interpretation of the story's events, and the English rendering of some of its inferences is not perfect. Still, the inferences are salient because the effective application of relevant knowledge is the focus of our work and this example. 
Our design strives for plausible inferences, not certain ones.</p><p>ConceptNet helps Genesis connect the United States' ambition and greed to its desire of conquering its opponent, which it completes when it triumphs by capturing Mexico City. Conquering its opponent allows it to gain land, a goal explicitly stated in the story. The country's greed makes it want to get something, and this goal is accomplished when the United States gains land, an example of an Aspire inference enabling Genesis to make an additional Aspire inference. There was no mention of conquering anywhere in the story text, but Genesis has inferred that this concept is relevant. Genesis was able to make all of these inferences and connections without any author-defined rules; instead, it relies solely on ConceptNet knowledge.</p><p>Aspire tracks the ConceptNet knowledge it uses while analyzing a story. Genesis uses this data provenance to display justifications for its analysis to the user, increasing transparency by showing exactly why Genesis believes a particular piece of ConceptNet knowledge is relevant to a situation. Some of the justifications for the Mexican-American war story are displayed in Figure <ref type="figure">4</ref>.</p><p>Figure <ref type="figure">4</ref>: Two of the justifications Genesis produces when analyzing the Mexican-American war story. Genesis justifies every ConceptNet-assisted inference with the relevant assertion from ConceptNet, increasing transparency. The confidence score accompanies each assertion.</p><p>Figure <ref type="figure">5</ref>: A series of questions and Genesis answers regarding the Mexican-American war story. Genesis's grammar in its English generation is not perfect, but the content is salient.</p><p>Causal connections are the base of the majority of Genesis's features because they form its reasoning substrate. 
The ConceptNet-assisted causal connections and inferences Aspire forms improve the performance of many Genesis features even though the rule author has given Genesis less information. Question answering is an example of such a feature. Two questions about the Mexican-American war story and Genesis's associated answers are displayed in Figure <ref type="figure">5</ref>.</p><p>When Genesis is asked "How did the United States gain land?", it references two story events that were never explicitly stated, albeit with imperfect grammar. When Genesis is asked about one of these inferences, it responds with further explanation, citing specific explicit story events. If Aspire were absent, Genesis would be unable to answer "Why" or "How" questions about this story as no rules were authored for it to use.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation</head><p>Is the Aspire system capable of producing plausible, humanlike inferences? We conducted a small study to compare Aspire's analyses with human reading comprehension. Evaluating Aspire for accuracy on a large corpus of stories in unrestricted natural language was not feasible due to limitations of the START parser and ConceptNet's limited coverage. Given these constraints, rather than test Aspire's comprehensive story understanding ability, our goal was to evaluate whether participants considered Aspire's answers to simple reading comprehension questions plausible. The results were promising. Participants' answers to the questions were roughly compatible with Aspire's, and they found Aspire's answers plausible.</p><p>Five male participants took part in the experiment. The participants were all in their early 20s, were racially diverse, and had varying academic backgrounds. We presented two example stories, one fiction and one nonfiction. The nonfiction story replaced real-world proper names with fictional names to avoid participants relying on their prior knowledge of historical events rather than what was explicitly mentioned in the text.</p><p>First, the participant read each story and answered the questions. Then, they were shown Aspire's answers and rated them for plausibility on a 5-point Likert scale. Each participant was asked five questions about each story. Participants were also asked up to three follow-up questions depending on their responses to the original five. The participants then rated thirteen of Aspire's responses. Because we were testing Aspire's application of ConceptNet knowledge, all questions focused on ConceptNet-assisted inferences. We removed any grammatical errors and oddities from Aspire's responses so that these would not distract the participant.</p><p>The results from the evaluation indicate that the participants found Aspire's inferences largely plausible. 
The average rating given by participants was 4.77 out of 5, suggesting that participants tend to agree with Aspire's answers to the questions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Discussion and Future Work</head><p>Much of ConceptNet's knowledge remains untapped, as Aspire only uses the "Causes Desire," "Motivated By Goal," "Used For," and "Has Subevent" relations. There are numerous other relations that would provide valuable information to Genesis, with "Causes" being the most enticing one. Causal knowledge could be wired directly into Genesis's story processing, allowing Genesis to identify general causal relationships in the story, moving beyond the goal-oriented connections Aspire forms.</p><p>Aspire operates using an important simplifying assumption which does not always hold: if a goal has had one or more actions taken towards completing it, the goal has been completed. As an example, suppose Aspire has formed the candidate character goal "Matt wants to dance." Aspire would connect the event "Matt plays music" to this goal because ConceptNet contains the knowledge "play music Motivated By Goal dance." Because the goal has had an action taken towards it, Aspire would instantiate both the inferred goal "Matt wants to dance" and the inferred goal completion "Matt dances." Instantiating the goal completion at this point is premature, though, and is incorrect if the story later indicates that Matt did not get to dance after all. We also do not consider more complex configurations where there are multiple interlocking or inhibiting goal structures.</p><p>It would be much better if, for any given goal, Aspire could distinguish which sorts of actions complete that goal from which sorts of actions merely contribute to it. Differentiating goal contribution from goal completion is just one example of the more robust understanding we'd like Aspire to reach. 
Ideas from Schank's Conceptual Dependency Theory could be useful in achieving deeper understanding, especially given recent work exploring how to crowdsource the requisite knowledge <ref type="bibr" target="#b18">(Schank 1975;</ref><ref type="bibr" target="#b13">Macbeth and Grandic 2017)</ref>. Adapting approaches and principles from Segmented Discourse Representation Theory (SDRT) could also prove beneficial <ref type="bibr" target="#b11">(Lascarides and Asher 2008)</ref>.</p><p>Alternative sources of common sense, such as Cyc and WebChild <ref type="bibr" target="#b2">(Guha and Lenat 1994;</ref><ref type="bibr" target="#b19">Tandon et al. 2014</ref>), may go beyond ConceptNet in providing more finely grained knowledge for story understanding. They could also assist in processing polysemy, homonymy, and the like. ConceptNet's simple design chooses not to capture these language intricacies, although they are addressed by some later versions of ConceptNet <ref type="bibr" target="#b4">(Havasi, Speer, and Pustejovsky 2010;</ref><ref type="bibr" target="#b1">Chen and Liu 2011)</ref>. Multiple commonsense knowledge bases could even be used in conjunction. We are not bound to Genesis either; other rule-based and logical systems certainly warrant investigation. Our broader focus is not on Genesis, ConceptNet, or Aspire, but instead on supplementing precise reasoning systems with large-scale, scruffy commonsense knowledge. This valuable combination results in both broad coverage and a significant depth of understanding.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>While many have applied commonsense knowledge to story understanding, we are not aware of any story understanding system that incorporates common sense at as large a scale as this work. Prior work in this area tends towards creating a small set of handcrafted commonsense rules rather than harnessing a substantial amount of common sense for use in many different contexts. Note that the work described in this section is far from a complete survey; instead, we focus on several closely related efforts and trends.</p><p>Gordon (2016) recently framed the problem of forming commonsense interpretations as a process of logical abduction, building on the ideas of <ref type="bibr" target="#b6">Hobbs, Stickel, Martin, and Edwards (1988)</ref>. He developed a model that reads a small story and chooses the more likely of two possible relevant story inferences. The model consults 136 hand-coded probabilistic commonsense axioms to generate scored hypotheses. The model performed well on the Triangle-COPA benchmark set, but it's not clear how scalable the approach is: it requires carefully constructed commonsense axioms, and Gordon does not propose a way to generate these axioms at scale. <ref type="bibr" target="#b0">Blass and Forbus (2017)</ref> have taken a larger-scale approach to commonsense causal reasoning through their analogical chaining formalism. Their reasoning system is initialized with a Cyc ontology but can also take natural language instruction. It answers questions from the COPA dataset. While they focus only on selecting plausible consequents of events, not general story understanding, their approach shares similarities with ours.</p><p>The field of machine reading, focused on the application of machine learning techniques to natural language understanding, is rapidly developing. 
Notable topics of interest include semantic role labeling, named-entity recognition, and question answering, the area most closely related to this work <ref type="bibr" target="#b22">(Young et al. 2017)</ref>. Popular benchmark question answering datasets from Stanford, Facebook, and Google DeepMind <ref type="bibr" target="#b16">(Rajpurkar et al. 2016;</ref><ref type="bibr" target="#b20">Weston et al. 2015;</ref><ref type="bibr" target="#b5">Hermann et al. 2015)</ref> are routinely used to evaluate current models. However, the answer to every question in these datasets is explicitly stated in the input text (except when the question has a yes or no answer, in which case the story explicitly contains all relevant information needed to answer it). Little inference or commonsense knowledge is required, as all of the questions can be answered by combining information from different sections of the input text. These datasets are a good first step towards more general intelligence, but natural language understanding systems must also be able to form inferences using background knowledge. The recently introduced RACE dataset <ref type="bibr" target="#b10">(Lai et al. 2017</ref>) requires significantly more commonsense reasoning in its question answering tasks than prior benchmarks, but has yet to gain popularity. The Story Cloze test, in which a system reads a four-sentence story and chooses the more plausible of two endings, is another popular means of evaluation <ref type="bibr" target="#b15">(Mostafazadeh et al. 2017)</ref>. Importantly, all of these datasets and evaluation schemes test the ability of a system to select a plausible inference from a set of choices, not to generate its own. Aspire's capacity to generate inferences distinguishes it from popular natural language understanding work, but also complicates evaluation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>Aspire brings us closer to the goal of applying Genesis to a corpus of naturally occurring text. By consulting ConceptNet, a large commonsense knowledge base composed of human-submitted assertions, Genesis can make many more inferences and causal connections. Rule authors can now focus significantly more on domain-specific rules describing niche information, letting ConceptNet fill in gaps with universal common sense. This technology is quite valuable in any domain that heavily depends on precedent, including law, medicine, and business.</p><p>We found that ConceptNet's relaxed data format simplified the process of forming plausible inferences, but also required developing heuristics to help prevent imprecision from resulting in incorrect inferences. Encouraged by these initial results, we look forward to incorporating additional large-scale common sense into Genesis processing. While the union of ConceptNet and Genesis has proven effective, our primary contribution is the general approach taken rather than the specifics. Several other rule-based systems, commonsense knowledge bases, and inference schemes merit exploration as well.</p><p>Many AI systems face the dilemma of how to achieve breadth and depth simultaneously. They desire the ability to handle a wide range of subject matter, but also strive to perform analysis at a meaningful and appropriately complex level. Trying to squeeze all these capabilities into a single representation and a single inference procedure might be a fool's errand. Instead, we are inspired by Minsky's recommendation of having multiple representations and multiple inference procedures <ref type="bibr">(1986)</ref>. Our work explores how to get them to work together, letting each do what it does best.</p><p>In the case of story understanding, comprehending plot structure, goals, and plans requires precision. Rule-based systems are appropriate for such analysis. 
When connecting these story elements with the details of concrete actions and specific situations, such precision isn't needed, but broad coverage of commonsense knowledge becomes crucial. Aspire shows how rule-based systems and broad-coverage commonsense knowledge bases can work together. Being a jack-of-all-trades doesn't mean you have to be master of none.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A visualization of a very small part of ConceptNet's knowledge. Concepts are shown in circles and relations are shown as directed arrows between concepts. © 2009 IEEE. Reprinted, with permission, from "Digital Intuition: Applying Common Sense Using Dimensionality Reduction," by C. Havasi, R. Speer, J. Pustejovsky, &amp; H. Lieberman, 2009, IEEE Intelligent Systems, 24, p. 4.</figDesc><graphic coords="2,54.00,102.85,238.50,146.03" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: A subset of the elaboration graph produced by Genesis for the Mexican-American war story. All events and connections inferred with ConceptNet knowledge are shown using dotted lines. Genesis was not given any author-defined rules to help analyze this story, instead relying entirely on ConceptNet.</figDesc><graphic coords="5,54.00,54.00,238.47,100.53" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,54.00,54.00,504.00,138.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The United States is a country. Mexico is a country. The year is 1846. Manifest Destiny is popular in the United States. The United States has ambition, has greed, and wants to gain land. Mexico and the United States disagree over borders. The United States moves into the disputed territory. Mexican forces attack the United States. The United States declares war on Mexico. Winfield Scott leads the United States army. The United States battles Mexico. The United States triumphs by capturing Mexico City. The United States defeats Mexico and wins the war. The Treaty of Guadalupe Hidalgo officially ends the war.</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This research was supported, in part, by the Air Force Office of Scientific Research, Award Number FA9550-17-1-0081.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Analogical Chaining with Natural Language Instruction for Commonsense Reasoning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Blass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">D</forename><surname>Forbus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4357" to="4363" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Combining ConceptNet and WordNet for Word Sense Disambiguation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCNLP</title>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1a">
	<analytic>
		<title level="a" type="main">Commonsense Interpretation of Triangle Behavior</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Gordon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
		<imprint>
			<publisher>AAAI</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="3719" to="3725" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Enabling agents to work together</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Guha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Lenat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="126" to="142" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Digital Intuition: Applying Common Sense Using Dimensionality Reduction</title>
		<author>
			<persName><forename type="first">C</forename><surname>Havasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Speer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pustejovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lieberman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent systems</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Coarse Word-Sense Disambiguation Using Common Sense</title>
		<author>
			<persName><forename type="first">C</forename><surname>Havasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Speer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pustejovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium: Commonsense Knowledge</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Teaching Machines to Read and Comprehend</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Hermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kocisky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grefenstette</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Espeholt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Kay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Suleyman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Blunsom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 28</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Cortes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Lawrence</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Lee</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Sugiyama</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1693" to="1701" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Interpretation as Abduction</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Hobbs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stickel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Edwards</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th Annual Meeting on Association for Computational Linguistics</title>
				<meeting>the 26th Annual Meeting on Association for Computational Linguistics</meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="95" to="103" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Story-enabled hypothetical reasoning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Holmes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Winston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Cognitive Systems</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Annotating the World Wide Web using Natural Language</title>
		<author>
			<persName><forename type="first">B</forename><surname>Katz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet</title>
				<meeting>the 5th RIAO Conference on Computer Assisted Information Searching on the Internet</meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="136" to="155" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Goal-driven Cognition and Functional Behavior: The Fundamental-Motives Framework</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Kenrick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Neuberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Griskevicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Becker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schaller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Current Directions in Psychological Science</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="63" to="67" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">RACE: Large-scale ReAding Comprehension Dataset From Examinations</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hovy</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">ArXiv e-prints</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Segmented Discourse Representation Theory: Dynamic Semantics with Discourse Structure</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lascarides</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Asher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computing Meaning. Springer</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="87" to="124" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">New Directions in Goal-Setting Theory</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Locke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">P</forename><surname>Latham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Current Directions in Psychological Science</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="265" to="268" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Crowdsourcing a Parallel Corpus for Conceptual Analysis of Natural Language</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Macbeth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Grandic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Fifth AAAI Conference on Human Computation and Crowdsourcing. The Association for the Advancement of Artificial Intelligence</title>
				<meeting>The Fifth AAAI Conference on Human Computation and Crowdsourcing. The Association for the Advancement of Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Minsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI magazine</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">34</biblScope>
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14a">
	<monogr>
		<title level="m" type="main">The Society of Mind</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Minsky</surname></persName>
		</author>
		<imprint>
			<publisher>Simon &amp; Schuster Paperbacks</publisher>
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">LSDSem 2017 Shared Task: The Story Cloze Test</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mostafazadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Roth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Louis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Chambers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Allen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">LSDSem</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15a">
	<monogr>
		<title level="m" type="main">Who Knows What? Perspective-Enabled Story Understanding</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Noss</surname></persName>
		</author>
		<ptr target="http://groups.csail.mit.edu/genesis/papers/2017%20Jessica%20Noss.pdf" />
		<imprint>
			<date type="published" when="2017">2017</date>
			<pubPlace>Cambridge, MA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Massachusetts Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Master&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Rajpurkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lopyrev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1606.05250</idno>
		<title level="m">SQuAD: 100,000+ Questions for Machine Comprehension of Text</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A Goal-directed Production System for Story Understanding</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Schank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wilensky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SIGART Bull</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page" from="72" to="72" />
			<date type="published" when="1977">1977</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Conceptual Information Processing</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Schank</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1975">1975</date>
			<publisher>Elsevier</publisher>
			<pubPlace>New York, NY</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">WebChild: Harvesting and Organizing Commonsense Knowledge from the Web</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tandon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Melo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Suchanek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM International Conference on Web Search and Data Mining</title>
				<meeting>the 7th ACM International Conference on Web Search and Data Mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="523" to="532" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Weston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Rush</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Merriënboer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1502.05698</idno>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20a">
	<monogr>
		<title level="m" type="main">A Commonsense Approach to Story Understanding</title>
		<author>
			<persName><forename type="first">B</forename><surname>Williams</surname></persName>
		</author>
		<ptr target="http://groups.csail.mit.edu/genesis/papers/2017%20Bryan%20Williams.pdf" />
		<imprint>
			<date type="published" when="2017">2017</date>
			<pubPlace>Cambridge, MA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Massachusetts Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Master&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">The Genesis Story Understanding and Story Telling System: a 21st Century Step Toward Artificial Intelligence</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">H</forename><surname>Winston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Center for Brains, Minds and Machines</title>
				<imprint>
			<publisher>CBMM</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Recent Trends in Deep Learning Based Natural Language Processing</title>
		<author>
			<persName><forename type="first">T</forename><surname>Young</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hazarika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Poria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Cambria</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1708.02709</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
