<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Knowledge-Based Translation of Natural Language into Symbolic Form</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christodoulos Ioannou</string-name>
          <email>christodoulos.ioannou@st.ouc.ac.cy</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loizos Michael</string-name>
          <email>loizos@ouc.ac.cy</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CYENS Center of Excellence</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Open University of Cyprus</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <fpage>266</fpage>
      <lpage>280</lpage>
      <abstract>
        <p>We consider the scenario of machines that receive human advice in natural language to revise their object-level knowledge for a domain of interest. Although techniques exist to translate such natural language advice into a symbolic form that is appropriate for machine reasoning, the translation process itself is typically pre-programmed and, thus, it is not amenable to dynamic and gradual improvement, nor can it be adjusted to the linguistic particularities of the advice giver. We seek to examine whether these limitations can be overcome through the use of a knowledge-based translation process. In this position paper we take a first step in this investigation by demonstrating how such meta-level translation knowledge can be engineered to support the interpretation of object-level advice. While, admittedly, our engineering of the meta-level knowledge amounts to pre-programming, it nonetheless pushes towards the automated acquisition of this meta-level knowledge through advice-taking by demonstrating a key prerequisite: that it can be expressed, and reasoned with, under the same syntax and semantics as the object-level knowledge.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>An oft-quoted lesson from the early days of Artificial
Intelligence is the importance of endowing machines with
commonsense knowledge [McCarthy, 1989]. Equally oft-quoted
is the realization that such commonsense knowledge cannot
be directly pre-programmed into a machine, but can, more
realistically, be acquired through some form of learning or,
more specifically, through some form of advice-taking from
a human [McCarthy, 1959]. But, most of the time, the
requirement for pre-programming machines has not been
completely side-stepped, but rather pushed up a level. Instead
of pre-programming the object-level (domain) knowledge of
machines, the majority of relevant AI research now
effectively deals with the task of pre-programming machines at
a meta-level (in the design of learning algorithms or
architectures) that instructs machines how to acquire the
object-level knowledge [Valiant, 2006; Michael and Valiant, 2008;
Michael, 2016; Sap et al., 2019]. Hence, one may ask:</p>
    </sec>
    <sec id="sec-2">
      <title>(Q1) Can this meta-level knowledge also be acquired by means analogous to how the object-level knowledge is acquired?</title>
      <p>In the case of interest to our work, the meta-level
knowledge represents a translation process (of object-level
knowledge) from natural language into symbolic form. One could
argue that question (Q1) is inconsequential in this case, since
the meta-knowledge needed for translation purposes is
predetermined and fixed, and comprises mostly generic rules
applicable across advice givers. We agree only partly! Groups
of people in different parts of the world or different
individuals may use the same language differently. Therefore, being
able to acquire meta-level knowledge by means analogous to
how the object-level knowledge is acquired would allow the
former knowledge to be treated as a policy of interpreting
verbal commands. Individuals would be able to expand the
policy to support interpreting additional sentence types, and
customize it to incorporate omitted but implied meanings or
different contexts when interpreting known sentence types.</p>
      <p>Consequently, our goal is to design a process that
allows users to explain how a sentence should be translated. For
this reason, even though the translation process for natural
language is, for the most part, user-agnostic, the amendment of this
process should be supported. A translation process providing
a dynamic and cognitively-light amendment procedure would
also help decrease the needed amount of cognitively-heavy,
batch-mode, offline pre-programming of meta-knowledge,
making an affirmative answer to question (Q1) important.</p>
      <p>The line of research that we pursue, whose initial results
we present in this position paper, seeks to provide a
response to (Q1) in the particular setting of
building a cognitive assistant able to help a human with a certain
task [Leviathan and Matias, 2018]. Such a cognitive
assistant cannot be pre-programmed, as it is expected to abide by
its user’s preferences when taking actions, which effectively
necessitates the assistant to acquire its user-specific
(object-level) knowledge through learning or advice-taking from the
user [Bernard and Arnold, 2019]. For concreteness, in this
paper we focus on knowledge acquisition through a process
of advice-taking as originally proposed by McCarthy [1959]
and further elaborated recently [Michael, 2017; MacLellan
et al., 2018] towards the development of the human-machine
interaction protocol of Machine Coaching [Michael, 2019].</p>
      <p>According to Machine Coaching, then, a cognitive
assistant observes the state of the world and reasons using its
current object-level knowledge on how to act. A human coach
observes the assistant and can inquire as to the reasons behind
the assistant’s choices. If the explanation is unconvincing or
otherwise insufficient or unacceptable to the human, then the
latter can offer advice back to the assistant on how to improve
its knowledge towards better future decision-making.
Importantly, we focus on the case where the interaction happens in
natural language, and we are interested, more specifically, in
understanding how the natural language advice offered by the
human to the assistant can be translated into some symbolic
form that can be used for reasoning by the assistant.</p>
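      <p>Schematically, and purely as an illustration (the predicate names below are hypothetical placeholders; the protocol itself is specified in [Michael, 2019]), one round of this interaction can be read as the following loop over the assistant's object-level knowledge:</p>
      <p>% One schematic round of Machine Coaching (illustrative sketch only).
coaching_round(Knowledge0, Event, Knowledge) :-
    decide(Knowledge0, Event, Action, Explanation),  % assistant reasons and acts
    show(Action, Explanation),                       % coach inspects the action and its reasons
    (  coach_accepts
    -&gt; Knowledge = Knowledge0                        % explanation suffices; nothing changes
    ;  coach_advice(Advice),                         % otherwise the coach offers advice ...
       revise(Knowledge0, Advice, Knowledge)         % ... which revises the object-level knowledge
    ).</p>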
      <p>Translating natural language into symbolic form is,
admittedly, a well-studied problem, and numerous relevant parsing
tools are available [Kamath and Das, 2019]. To our
knowledge, systems built upon such parsing tools are effectively
pre-programming the translation (meta-level) knowledge of
the cognitive assistant on how to interpret the (object-level)
advice from a human. Our aim, then, is to examine whether
an alternative to pre-programming is feasible and effective.
Can this meta-level knowledge be learned? Or more
specifically: (Q2) Can meta-level knowledge (of how to interpret
object-level advice) be acquired through Machine Coaching?</p>
      <p>Question (Q2) is our main long-term research question. We
take a first step towards answering this question by
demonstrating how the meta-level knowledge can be engineered to
achieve its goal of interpreting object-level advice. Although
engineering the meta-level knowledge does, in fact, amount
to pre-programming, it nonetheless pushes towards the
automated acquisition of this meta-level knowledge through
Machine Coaching by demonstrating the key prerequisite that it
can be expressed, and reasoned with, under the same syntax
and semantics as the object-level knowledge, which itself is
amenable to acquisition through Machine Coaching.
</p>
      <sec id="sec-2-1">
        <title>A Machine Coaching Example</title>
        <p>As our running example, we consider a Cognitive Call
Assistant (CCA) installed on a smart phone that takes actions upon
receiving incoming calls, with an architecture as in Figure 1.</p>
        <p>Upon receiving an incoming call, the CCA uses its current
(object-level) knowledge to decide on how to act. The user
can interact with the CCA to help it improve its knowledge, so
that future incoming call events can be handled more
appropriately (according to the user’s personal preferences). The
interaction may have the form of verbal advice, handled by
the Verbal Coaching Interface, or non-verbal commands
handled by the Non-Verbal Coaching Interface (which includes
applications that any standard smart phone has, like
calendars, maps, etc.). The dialogue below demonstrates this:
1. Event: Incoming call from some caller.
2. User: Declines. “Decline calls when busy.”
3. CCA: “When do you consider yourself to be busy?”
4. User: Uses calendar and map applications and perhaps
verbal commands to instruct that she considers herself
busy when being in a meeting at work.
5. Event: (Later in the week. . . ) Incoming call from John
while user is in a work meeting.
6. CCA: Declines the call.
7. User: “Why did you decline this call?”
8. CCA: “Because you are at work and at a meeting, so I
conclude that you are busy.”
9. User: “Send SMS saying ‘Busy! Will call back later.’ to
the caller when the call is important. If John is the caller
then the call is important.”</p>
          <p>The example above is an instance of a Machine Coaching
interaction used to improve the object-level (domain)
knowledge of the CCA. The Verbal Coaching Interface is
responsible for the translation of natural language into symbolic form.
We consider this module’s policy to be implemented through
meta-level knowledge expressed and reasoned with under the
same syntax and semantics as the object-level knowledge,
and, thus, amenable to the same means of acquisition.</p>
          <p>We assume that the Verbal Coaching Interface ships with
an initial version of the meta-level (translation) knowledge.
Compared to the CCA’s object-level (domain) knowledge, the
training of which is primarily the responsibility of the user,
the initial version of the meta-level knowledge is engineered
by experts to support some minimal level of translation. If
the end user is unsatisfied with the translation process (i.e., if
the translated symbolic form of a natural language sentence is
wrong or unexpected, or a natural language feature is
unsupported), the end user should be able to advise and revise the
meta-level knowledge as done for the object-level knowledge.</p>
          <p>Naturally, the initial version of the meta-level knowledge
might not be rich enough to allow the user to explain and
correct all errors or handle all unknown advice sentences. Since
we do not expect the user to be fluent in the symbolic form
or to be a natural language expert, she must be able to ask an
expert for help, who will advise, in her stead, the translator
module using logic-based communication and the Machine
Coaching protocol to improve the translation process and/or
adjust it to her needs. We illustrate this type of interaction by
extending the conversation of our running example.</p>
          <p>Suppose that the Verbal Coaching Interface fails to
translate the last advice sentence from the user to the CCA during
their last interaction from above. The dialogue continues:
10. CCA: “I am unable to interpret the advice ‘If John is the
caller then the call is important’. Do you want to consult
an expert to help improve the translation policy?”
11. User: “Yes, please!”
12. CCA: “Please provide additional information that you
wish to pass on to the expert.”
13. User: “By this sentence I mean that I wish all calls
coming from John to be labeled as important. This, in turn,
will trigger a different treatment by the CCA.”
14. Event: The advice that could not be interpreted, along
with the additional information provided by the user are
sent to an expert. The expert machine coaches the CCA
to improve its meta-level knowledge.
15. CCA: “Expert advice has been received and the
translation policy has been amended. Please continue!”</p>
          <p>Thus, using Machine Coaching (albeit communicating
directly in the symbolic form of the meta-knowledge), the
expert can dynamically and gradually incorporate additional
features of natural language (e.g., word synonyms, sentence
types, etc.) and user-specific adjustments (e.g., personalized
phrase disambiguation, local dialect, etc.) that might have not
been part of the initial version of the meta-level knowledge.
</p>
      </sec>
      <sec id="sec-2-2">
        <title>Overview of Related Work</title>
        <p>Our primary focus is on translating natural language advice
or questions into symbolic form. A lot of work has been done
in information and rule extraction from natural text, using
ontologies or other scope-related information [Delannoy et al.,
1993; Dragoni et al., 2016; Hassanpour et al., 2011]. Unlike
that work, our methodology does not seek to find important
words / phrases based on the scope of the application, and
no ontologies are used. Instead, we investigate the structure
and relations within natural language sentences without
taking the context into account, which has the advantage that
our methodology is application-oblivious. Although not
using ontologies or scope-related annotations might lead to vital
information being unavailable when rules are extracted, this
can be gradually remedied through Machine Coaching.</p>
        <p>Other work has also been done in symbolic analysis of
natural language text with the use of Controlled Natural
Languages (CNL) [Kuhn, 2014]. Although our approach is
similar in some ways to CNL approaches [Kain and Tompits,
2019], we do not strictly define a controlled natural language;
instead, we start by considering a basic subset of natural
language that we can then gradually and iteratively expand.</p>
        <p>Our proposed architecture starts by extracting dependency
relations between words, along with Part of Speech (POS)
and Named Entity Recognition (NER) annotations
[Grishman, 1997]. This information is subsequently used to
identify patterns in the semantic relations that hold in a sentence,
which support the construction of a rule capturing what a user
wishes to communicate through that sentence. The key
departure of our work from past works [Akbik and Broß, 2009;
Gerber and Ngomo, 2012; Bos, 2015] is that the translation
process itself is encoded as a collection of meta-rules,
constituting the meta-level knowledge of a cognitive assistant.</p>
        <p>As we have argued, this is a first step towards being able
to acquire this knowledge gradually and dynamically,
without the need for a pre-specified ontology, and in a manner that
allows the meta-level knowledge (and hence the translation
process of the object-level advice) to be amenable to
adjustment to the linguistic particularities of each human.
</p>
      </sec>
      <sec id="sec-2-3">
        <title>Architecture and Methodology</title>
        <p>The pipeline design of the CCA’s Verbal Coaching Interface
responsible for the interpretation of a piece of advice given
by the user is shown in Figure 2. According to the pipeline,
a piece of advice given verbally by the user is parsed (in
Module 1) into a collection of predicates that describe the
dependencies in the sentence of the advice. These predicates
act as input to the meta-level knowledge of the CCA, with
which the CCA reasons (in Module 2) to turn the piece of
advice into an object-level rule, thus interpreting the advice.
This object-level rule is then used to revise, through Machine
Coaching, the CCA’s object-level knowledge, and it is this
object-level knowledge that is utilized by the CCA when
deciding how to handle future incoming calls. The details of the
Machine Coaching revision mechanism are inconsequential
for this work, and can be found elsewhere [Michael, 2019].</p>
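        <p>As a minimal sketch of this hand-off (the predicates parse_to_meta_facts/1 and assemble_rule/2 below are hypothetical placeholders rather than the actual interface of our implementation), Module 1 can be viewed as asserting the dependency meta-predicates for the given sentence, after which Module 2 simply queries the meta-level knowledge:</p>
        <p>% Illustrative driver for the pipeline of Figure 2.
interpret_advice(Sentence, Rule) :-
    parse_to_meta_facts(Sentence),         % Module 1: assert root/4, nsubj/4, pos/3, ner/3, ...
    findall(rt(HTL, HVL, BTL, BVL),
            ruleterms(HTL, HVL, BTL, BVL), % Module 2: apply the engineered meta-rules
            Parts),
    assemble_rule(Parts, Rule).            % combine the contributed parts into one object-level rule</p>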
        <p>In the remainder of this section we discuss in more detail
the key components of the architecture described above.
</p>
        <p>4.1 User Language for Advice-Giving
For the purposes of this work, we restrict the types of
sentences in which object-level advice can be expressed to the
ones below [Huddleston and Pullum, 2002]. The numbers in
brackets after each sentence type are references to the
respective sentence examples that are provided in Appendix A:
1. Simple Declarative Sentences (SDS)
1.1. Simple Declarative To-Be Sentences
1.1.1. with Adjective or Noun Predicate [1–3]
1.1.2. with Verb Predicate [4–5]
1.2. Simple Declarative Verb Sentences
1.2.1. with Object [6]
1.2.2. without Object [7–9]</p>
        <sec id="sec-2-3-1">
          <title>2. Simple Imperative Sentences (SIS) [10]</title>
        </sec>
        <sec id="sec-2-3-2">
          <title>3. Conditional Sentences (zero conditional)</title>
          <p>Below SDC stands for Simple Declarative Clause
and SIC stands for Simple Imperative Clause.
3.1. Imperative Conditional Sentences (ICS)
3.1.1. If/When SDC, SIC [11–12]
3.1.2. If SDC, then SIC [13]
3.1.3. SIC if/when SDC [14]
3.1.4. all above with Implied Subject in SDC [15–16]
3.2. Declarative Conditional Sentences (DCS)
3.2.1. If/When SDC, SDC [17–19]
3.2.2. If SDC, then SDC [20]
3.2.3. SDC if/when SDC [21–22]
3.2.4. all above with Implied Subject in SDC [23]
For the purposes of simplicity, sentences with negation are
not considered in this initial set of supported sentence types.
Given an object-level advice sentence, we apply Natural
Language Processing (NLP) tools to get dependency, part of
speech, and named entity recognition data (in the form of a set
of meta-predicates) and apply dependency analysis to
manually identify patterns across these sets, which we later use
to engineer meta-rules. We have defined and consider three
types of meta-predicates for use in the body of meta-rules:
&lt;Dependency&gt;(Parent Word; Parent Word Position; Child Word; Child Word Position)
pos(&lt;POS Tag&gt;; Word; Word Position)
ner(&lt;NER Tag&gt;; Word; Word Position)</p>
          <p>where &lt;Dependency&gt;, &lt;POS Tag&gt;, and &lt;NER Tag&gt;
are placeholders for the dependency relation between two
words, for the Part of Speech of a word, and for the Named
Entity Recognition tag of a word, respectively, as identified
by NLP. In addition to the above meta-predicates, a special
meta-predicate is used in the head of meta-rules to represent
predicates and variables of the generated object-level rule:
ruleterms(Head Terms; Head Vars; Body Terms; Body Vars)</p>
          <p>where Head Terms and Body Terms are lists of lists that
represent object-level predicates for the head and the body of the
generated rule, respectively. Each sub-list represents a single
predicate, and each sub-list member represents a part of the
predicate (in case a predicate is a multi-word concatenation).
Head Vars and Body Vars are also lists of lists and represent
the corresponding variables of the object-level predicates.</p>
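          <p>For illustration, and assuming the usual CoreNLP-style dependency, POS, and lemma output for the sentence "A call is important." (the exact tags below are indicative rather than verbatim system output), Module 1 would assert meta-predicate instances along the following lines, from which the meta-rules then derive the ruleterms instance shown for Example Sentence 1 in Section 4.3:</p>
          <p>% Meta-predicate facts for "A call is important." (indicative).
root(root, 0, important, 4).
nsubj(important, 4, call, 2).
cop(important, 4, be, 3).
det(call, 2, a, 1).
pos(jj, important, 4).
pos(nn, call, 2).
pos(vbz, be, 3).</p>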
          <p>Each advice sentence is assumed, in this work, to contain
a single conditional, corresponding, thus, to a single
object-level rule. This single rule, however, might involve the
application of multiple meta-rules on the given advice sentence,
with each meta-rule contributing part of the generated
object-level rule; see, e.g., Example Sentence 2 in Section 4.3.
</p>
          <p>4.3 Engineering the Meta-Level Rules
For each of the sentence types considered in Section 4.1, we
have engineered one or more associated meta-rules, which
are presented in Appendix B. Three meta-rules are discussed
below as part of two examples that demonstrate the key ideas.</p>
          <p>Each example comprises an input sentence and its type,
the intuitively expected formal rule corresponding to that
sentence, the dependency parse tree for that sentence, the
applicable meta-rules that are triggered by that sentence type, the
inferences drawn from the meta-rules, and ultimately the
generated object-level rule. Within each dependency parse tree,
the bounding boxes of the respective figures highlight the
patterns that trigger the meta-rules. Each variable W and P in
the meta-rules corresponds, respectively, to an arbitrary word
in the sentence and the word’s position in that sentence.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Example Sentence 1: A call is important.</title>
      <p>Type: 1.1.1. Simple Declarative To-Be Sentence with
Adjective Predicate.</p>
      <p>Expected Rule: call(X) implies important(X);
Dependency Parse Tree: see Figure 3 (left).</p>
      <p>Applicable Meta-Rules:
root(root; 0; W1; P1); cop(W1; P1; be; ); nsubj(W1; P1; W2; )
implies ruleterms([[W1]]; [[X1]]; [[W2]]; [[X1]]);
Meta-Inferences:
ruleterms([[important]]; [[X1]]; [[call]]; [[X1]])
Generated Rule:
call(X1) implies important(X1);</p>
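      <p>For readers who prefer the executable form, the meta-rule above corresponds directly to a Prolog clause over the meta-predicates (a simplified rendering; the clauses in Appendix B factor the same pattern through helper predicates such as sds/4 and sdc/6, and also handle named-entity subjects):</p>
      <p>% Simple declarative to-be sentence with adjective or noun predicate.
ruleterms([[W1]], [['X1']], [[W2]], [['X1']]) :-
    root(root, 0, W1, P1),
    cop(W1, P1, be, _),
    nsubj(W1, P1, W2, _).
% With the facts for "A call is important." asserted, the query
%   ?- ruleterms(HTL, HVL, BTL, BVL).
% binds HTL = [[important]] and BTL = [[call]], yielding
%   call(X1) implies important(X1);</p>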
    </sec>
    <sec id="sec-4">
      <title>Example Sentence 2: Send SMS when call is important.</title>
      <p>Type: 3.1.3. Imperative Conditional Sentence (SIC when
SDC).</p>
      <p>Expected Rule:</p>
      <p>call(X); important(X); SMS(Y) implies send(Y);
Dependency Parse Tree: see Figure 3 (right).</p>
      <p>Applicable Meta-Rules:
advcl( ; ; W1; P1); cop(W1; P1; be; ); nsubj(W1; P1; W2; )
implies ruleterms([[]]; [[]]; [[W2]; [W1]]; [[X1]; [X1]]);
root(root; 0; W1; P1); pos(verb; W1; P1); dobj(W1; P1; W2; )
implies ruleterms([[W1]]; [[X2]]; [[W2]]; [[X2]]);
Meta-Inferences:
ruleterms([[]]; [[]]; [[call]; [important]]; [[X1]; [X1]])
ruleterms([[send]]; [[X2]]; [[SMS]]; [[X2]])
Generated Rule:
call(X1); important(X1); SMS(X2)
implies send(X2);</p>
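      <p>A minimal sketch of how such partial contributions can be merged into the single generated rule (combine/3 is a hypothetical helper shown only for illustration; it is not part of the engineered knowledge of Appendix B): the head and body term/variable lists of the triggered meta-rules are concatenated, and the empty placeholder sub-lists are dropped.</p>
      <p>% Illustrative combination of the parts contributed by two meta-rules.
combine(rt(H1, HV1, B1, BV1), rt(H2, HV2, B2, BV2), rt(H, HV, B, BV)) :-
    append(H1, H2, H0),    exclude(==([]), H0, H),
    append(HV1, HV2, HV0), exclude(==([]), HV0, HV),
    append(B1, B2, B0),    exclude(==([]), B0, B),
    append(BV1, BV2, BV0), exclude(==([]), BV0, BV).
% For Example Sentence 2, combining the two ruleterms inferences above gives
%   rt([[send]], [['X2']], [[call],[important],[SMS]], [['X1'],['X1'],['X2']]),
% i.e. call(X1); important(X1); SMS(X2) implies send(X2);</p>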
      <p>In addition to the example meta-rules that are presented
above, there are versions of those meta-rules that cope with
the case of sentences with a named entity subject, since the
resulting object-level rules need to be different in such cases.</p>
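      <p>As a small illustration of this distinction (again in simplified form; the corresponding clauses in Appendix B realize it through sdc/6 and sdvc/6), when the subject is recognized as a named entity no variable is introduced for it: the subject word itself fills the argument slot and the body is left empty, so that, e.g., "John is working." of Appendix A yields the ground rule work(john); rather than a quantified one.</p>
      <p>% Variant for a named-entity subject: the subject becomes a constant argument.
ruleterms([[W1]], [[W2]], [[]], [[]]) :-
    root(root, 0, W1, P1),
    aux(W1, P1, be, _),
    nsubj(W1, P1, W2, P2),
    ner(_, W2, P2).</p>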
      <p>In terms of the methodology followed in engineering
meta-rules, we note that this involved the manual analysis of
multiple sentences of each of the considered types, and the
identification and clustering of dependency patterns. In a sense, then,
we have manually-induced these meta-rules from a training
set of sentences, having in mind that the engineered
meta-rules are expected to apply and work well also with sentences
(of the same types) that were not part of the training set.
</p>
      <sec id="sec-4-1">
        <title>Preliminary Empirical Evaluation</title>
        <p>Towards empirically evaluating whether the engineered
meta-level knowledge adequately addresses the task of interpreting
user advice, we have implemented in Java the pipeline
outlined in Section 4. Natural language is parsed using the
Stanford CoreNLP Library [Manning et al., 2014], the meta-level
knowledge is implemented, for quick prototyping purposes,
in SWI-Prolog, and the Java Prolog Library (JPL) is used
to interface with the meta-level knowledge. The meta-level
knowledge can be invoked to translate advice from natural
language into symbolic form through a desktop application
or a web-service call (http://cognition.ouc.ac.cy/nestor/).</p>
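        <p>Concretely, the meta-predicates are declared dynamic in the Prolog knowledge base (Appendix B), so the Java side can assert the CoreNLP output for one sentence, query ruleterms/4, and clear the facts before the next sentence is processed. A minimal sketch of this per-sentence cycle on the Prolog side (translate_sentence/1, clear_meta_facts/0, and print_rule/4 are illustrative names, not the actual interface exposed over JPL):</p>
        <p>% Illustrative per-sentence cycle over the dynamic meta-predicates.
translate_sentence(Facts) :-
    clear_meta_facts,
    forall(member(F, Facts), assertz(F)),
    forall(ruleterms(HTL, HVL, BTL, BVL),
           print_rule(HTL, HVL, BTL, BVL)).
clear_meta_facts :-
    retractall(root(_, _, _, _)),   retractall(nsubj(_, _, _, _)),
    retractall(cop(_, _, _, _)),    retractall(aux(_, _, _, _)),
    retractall(dobj(_, _, _, _)),   retractall(advcl(_, _, _, _)),
    retractall(advmod(_, _, _, _)), retractall(mark(_, _, _, _)),
    retractall(pos(_, _, _)),       retractall(ner(_, _, _)).</p>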
        <p>This position paper does not provide a full-fledged
quantitative empirical evaluation, which is part of our ongoing
work. However, as part of a preliminary evaluation, we
created a custom data-set that includes at least one sentence of
each type considered, along with its variants, and applied our
engineered meta-level knowledge on that data-set. The list of
the generated object-level rules can be found in Appendix A.
That outcome was evaluated by the authors qualitatively, and
below we present some early findings from this evaluation:
• The generated object-level rules reasonably capture the
meaning of their associated natural language sentences.
• In some cases, slightly altering the syntax of a sentence
(e.g., removing the comma from the sentence “If John is
busy, send SMS”), alters the produced dependency tree.
• Simple sentences with omitted words and implied
meanings cannot be handled appropriately.
• NLP sometimes incorrectly labels words with multiple
meanings, resulting in incorrect dependency trees.
</p>
      </sec>
      <sec id="sec-4-2">
        <title>Conclusion and Future Work</title>
        <p>We have presented our progress in developing a translation
process from natural language into symbolic form that is,
itself, expressible in the same symbolic form. Even though
the meta-level knowledge developed in the context of this
paper was engineered, our initial results provide some
confidence that the meta-level knowledge itself could be acquired
through a process of Machine Coaching, as is the case with
the object-level knowledge. This, in turn, suggests an
affirmative answer to the two questions posed in this paper,
providing for a translation process that is amenable to dynamic
change in an elaboration tolerant manner [McCarthy, 1998].</p>
        <p>Towards solidifying our initial work and findings, we are
currently in the process of porting the meta-level knowledge
from SWI-Prolog to Prudens, a first-order logic language and
framework (http://cognition.ouc.ac.cy/prudens/) with
semantics that supports directly the paradigm of Machine Coaching.
This will be followed by the undertaking of a quantitative
empirical study to evaluate the efficiency, effectiveness, and
explainability of a machine-coachable translation process.</p>
        <p>
          More specific goals to be pursued in future work include:
• demonstrating the feasibility of machine coaching for the
object-level task by utilizing engineered meta-rules;
• evaluating more thoroughly the ability of our engineered
meta-level knowledge to cope with a varied collection of
advice sentences of the types already supported;
• examining how the sentence types that are supported can be
extended
          <xref ref-type="bibr" rid="ref11 ref13 ref15">(e.g., by supporting negation, logical connectors,
implied meanings, pronoun disambiguation, textual
entailment [Levesque et al., 2012; Korman et al., 2018])</xref>
          ;
• supporting other natural languages, and quantifying the
extent to which meta-level knowledge developed for one
natural language can be transferred (even partially) to another;
• experimenting with the use of machine learning techniques
for the acquisition of meta-level knowledge;
• acquiring meta-level knowledge through a Verbal Coaching
Interface, analogously to the acquisition method of
object-level knowledge that is described in this paper.
        </p>
        <p>The last direction from above raises, in fact, certain
interesting theoretical questions: Is there a meta-language rich
enough to support its own verbal-coachability without the
need for the involvement of experts? Is there a way to decide
or to prove this claim? Or is there a need for an infinite
hierarchy of meta-languages to support the verbal-coaching
of knowledge at lower levels of the meta-language hierarchy?</p>
        <p>Even without concrete answers to these theoretical
questions, central in our future work is to include a
knowledge-based translation process in a cognitive assistant that can be
verbally-coached, and which will also support the
machine-coaching (even if not the verbal-coaching) of the translation
process itself, so that the latter can improve over time and
adapt to the linguistic particularities of its current user.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Acknowledgements</title>
        <p>This work was supported by funding from the EU’s Horizon
2020 Research and Innovation Programme under grant
agreements no. 739578 and no. 823783, and from the Government
of the Republic of Cyprus through the Deputy Ministry of
Research, Innovation, and Digital Policy.</p>
      </sec>
      <sec id="sec-4-4">
        <title>A Evaluation Data and Results</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Sentence 1: Students are smart.</title>
      <p>Type: 1.1.1. Simple Declarative To-Be Sentences with
Adjective or Noun Predicate.</p>
      <p>Generated Rule:</p>
      <p>student(X1) implies smart(X1);</p>
    </sec>
    <sec id="sec-6">
      <title>Sentence 2: Eagle is a bird.</title>
      <p>Type: 1.1.1. Simple Declarative To-Be Sentences with
Adjective or Noun Predicate.</p>
      <p>Generated Rule:</p>
      <p>bird(eagle);</p>
    </sec>
    <sec id="sec-7">
      <title>Sentence 3: Gifts from my wife are precious.</title>
      <p>Type: 1.1.1. Simple Declarative To-Be Sentences with
Adjective or Noun Predicate (as part of a complex sentence).
Generated Rule:
gift(X1) implies precious(X1);</p>
    </sec>
    <sec id="sec-8">
      <title>Sentence 4: John is working.</title>
      <p>Type: 1.1.2. Simple Declarative To-Be Sentences with Verb
Predicate.</p>
      <p>Generated Rule:</p>
      <p>work(john);</p>
    </sec>
    <sec id="sec-9">
      <title>Sentence 5: Students are studying.</title>
      <p>Type: 1.1.2. Simple Declarative To-Be Sentences with Verb
Predicate.</p>
      <p>Generated Rule:</p>
      <p>student(X1) implies study(X1);</p>
    </sec>
    <sec id="sec-10">
      <title>Sentence 6: Birds have wings.</title>
      <p>Type: 1.2.1. Simple Declarative Verb Sentences with Object.
Generated Rule:</p>
      <p>birds(X2); wing(X1) implies have(X2; X1);</p>
    </sec>
    <sec id="sec-11">
      <title>Sentence 7: A man cries.</title>
      <p>Type: 1.2.2. Simple Declarative Verb Sentences without
Object.</p>
      <p>Generated Rule:</p>
      <p>man(X1) implies cry(X1);</p>
    </sec>
    <sec id="sec-12">
      <title>Sentence 8: John speaks.</title>
      <p>Type: 1.2.2. Simple Declarative Verb Sentences without
Object.</p>
      <p>Generated Rule:</p>
      <p>speak(john);</p>
    </sec>
    <sec id="sec-13">
      <title>Sentence 9: Eagles fly high.</title>
      <p>Type: 1.2.2. Simple Declarative Verb Sentences without
Object (as part of a complex sentence).</p>
      <p>Generated Rule:</p>
      <p>eagles(X1) implies fly(X1);</p>
    </sec>
    <sec id="sec-14">
      <title>Sentence 10: Answer the phone.</title>
      <sec id="sec-14-1">
        <title>Type: 2. Simple Imperative Sentences.</title>
        <p>Generated Rule:</p>
        <p>phone(X1) implies answer(X1);</p>
    </sec>
    <sec id="sec-15">
      <title>Sentence 11: If John is busy, send SMS.</title>
      <p>Type: 3.1.1. Imperative Conditional Sentences (If/When
SDC 1.1.1, SIC).</p>
      <p>Generated Rule:</p>
      <p>busy(john); sms(X1) implies send(X1);</p>
    </sec>
    <sec id="sec-16">
      <title>Sentence 12: If John is working, send SMS.</title>
      <p>Type: 3.1.1. Imperative Conditional Sentences (If/When
SDC 1.1.2, SIC).</p>
      <p>Generated Rule:
work(john); sms(X1) implies send(X1);</p>
    </sec>
    <sec id="sec-17">
      <title>Sentence 13: If John has a phone, then send SMS.</title>
      <p>Type: 3.1.2. Imperative Conditional Sentences (If SDC 1.2.1,
then SIC).</p>
    </sec>
    <sec id="sec-18">
      <title>Sentence 20: If John is rich, then Anna has money.</title>
      <p>Type: 3.2.2. Declarative Conditional Sentences (If SDC
1.1.1, then SDC 1.2.1).</p>
      <p>Generated Rule:
rich(john); money(X1) implies have(anna; X1);</p>
    </sec>
    <sec id="sec-19">
      <title>Sentence 21: Dad is happy when paint shines.</title>
      <p>Type: 3.2.3. Declarative Conditional Sentences (SDC 1.1.1
if/when SDC 1.2.2).</p>
      <p>Generated Rule:</p>
      <p>paint(X3); shine(X3); dad(X1) implies happy(X1);</p>
    </sec>
    <sec id="sec-20">
      <title>Sentence 22: Dad washes the car when the sun shines.</title>
      <p>Type: 3.2.3. Declarative Conditional Sentences (SDC 1.2.1
if/when SDC 1.2.2).</p>
      <p>Generated Rule:
sun(X3); shine(X3); car(X1); dad(X2)
implies wash(X2; X1);</p>
    </sec>
    <sec id="sec-21">
      <title>Sentence 23: When tired, head is aching.</title>
      <p>Type: 3.2.4. Declarative Conditional Sentences (If/When
SDC with implied Subject, SDC 1.1.2).</p>
      <p>Generated Rule:</p>
      <p>tired; head(X1) implies ache(X1);
</p>
      <p>B Engineered Meta-Level Knowledge</p>
      <p>:- dynamic pos/3.       :- dynamic ne/3.
:- dynamic root/4.      :- dynamic det/4.
:- dynamic dobj/4.      :- dynamic advmod/4.
:- dynamic advcl/4.     :- dynamic punct/4.
:- dynamic nsubj/4.     :- dynamic cop/4.
:- dynamic mark/4.      :- dynamic case/4.
:- dynamic nmod/4.      :- dynamic amod/4.
:- dynamic nmod_poss/4. :- dynamic aux/4.
:- dynamic ccomp/4.     :- dynamic ner/3.

% Utilities %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% addvarlist - Add indexed var list
addvarlist(0, _, []).
addvarlist(N, OFFSET, [[VARNAME]|BASEVLL]) :-
    N &gt; 0,
    VARINDEX is OFFSET + N,
    string_concat('X', VARINDEX, VARNAME),
    PREVN is N - 1,
    addvarlist(PREVN, OFFSET, BASEVLL).

% addvar - Add indexed var
addvar(0, _, []).
addvar(N, OFFSET, [VARNAME|BASEVL]) :-
    N &gt; 0,
    VARINDEX is OFFSET + N,
    string_concat('X', VARINDEX, VARNAME),
    PREVN is N - 1,
    addvar(PREVN, OFFSET, BASEVL).

% Meta-Level Knowledge Definition %%%%%%%%%%%
% Simple TO BE Clauses w. adj. or noun pred.
sdc(_, W1, [[W1]], [[W2]], [[]], [[]]) :-
    cop(W1, P1, be, _),
    nsubj(W1, P1, W2, P2), ner(_, W2, P2), !.
sdc(OFFSET, W1, [[W1]], HVL, [[W2]], BVL) :-
    cop(W1, P1, be, _),
    nsubj(W1, P1, W2, _),
    addvarlist(1, OFFSET, HVL),
    addvarlist(1, OFFSET, BVL).

% Simple TO BE Clauses with verb Predicate
sdc(_, W1, [[W1]], [[W2]], [[]], [[]]) :-
    aux(W1, P1, be, _),
    nsubj(W1, P1, W2, P2), ner(_, W2, P2), !.
sdc(OFFSET, W1, [[W1]], HVL, [[W2]], BVL) :-
    aux(W1, P1, be, _),
    nsubj(W1, P1, W2, _),
    addvarlist(1, OFFSET, HVL),
    addvarlist(1, OFFSET, BVL), !.

% Simple Transitive Verb Clauses
sdvc(_, W1, [[W1]], [[W2, W3]], [[]], [[]]) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, P2), ner(_, W2, P2),
    dobj(W1, P1, W3, P3), ner(_, W3, P3), !.
sdvc(OFFSET, W1, [[W1]], [[W2|HVL]], [[W3]], BVL) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, P2), ner(_, W2, P2),
    dobj(W1, P1, W3, _),
    addvar(1, OFFSET, HVL),
    addvarlist(1, OFFSET, BVL), !.
sdvc(OFFSET, W1, [[W1]], [HVL], [[W2]], BVL) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, _),
    dobj(W1, P1, W3, P3), ner(_, W3, P3),
    addvar(1, OFFSET, PHVL),
    append(PHVL, [W3], HVL),
    addvarlist(1, OFFSET, BVL), !.
sdvc(OFFSET, W1, [[W1]], [HVL], [[W2], [W3]], BVL) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, _),
    dobj(W1, P1, W3, _),
    addvar(2, OFFSET, HVL),
    addvarlist(2, OFFSET, BVL), !.

% Simple Verb Clauses
sdvc(_, W1, [[W1]], [[W2]], [[]], [[]]) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, P2), ner(_, W2, P2), !.
sdvc(OFFSET, W1, [[W1]], HVL, [[W2]], BVL) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    nsubj(W1, P1, W2, _),
    addvarlist(1, OFFSET, HVL),
    addvarlist(1, OFFSET, BVL).

% Simple Declarative Sentences
sds(HTL, HVL, BTL, BVL) :-
    root(root, 0, W1, _),
    sdc(0, W1, HTL, HVL, BTL, BVL), !.
sds(HTL, HVL, BTL, BVL) :-
    root(root, 0, W1, _),
    sdvc(0, W1, HTL, HVL, BTL, BVL).

% Simple Imperative Clauses with subject
sic([[W1]], [['X1']], [[W2]], [['X1']]) :-
    pos(TAG, W1, P1), sub_string(TAG, 0, _, _, 'vb'),
    dobj(W1, P1, W2, _).

% Simple Imperative Sentences
sis(HTL, HVL, BTL, BVL) :-
    root(root, 0, W1, _),
    sic([[W1]], HVL, BTL, BVL),
    append([], [[W1]], HTL).

% Simple IF conditional clause A
scc([[]], [[]], BTL, BVL) :-
    mark(W1, P1, 'if', _), advcl(_, _, W1, P1),
    sdvc(2, W1, [[W1]], CHVL, CBTL, CBVL),
    append(CBTL, [[W1]], BTL),
    append(CBVL, CHVL, BVL), !.

% Simple WHEN conditional clause A
scc([[]], [[]], BTL, BVL) :-
    advmod(W1, P1, when, _), advcl(_, _, W1, P1),
    sdvc(2, W1, [[W1]], CHVL, CBTL, CBVL),
    append(CBTL, [[W1]], BTL),
    append(CBVL, CHVL, BVL), !.

% Simple IF conditional clause B
scc([[]], [[]], BTL, BVL) :-
    mark(W1, P1, 'if', _), advcl(_, _, W1, P1),
    sdc(1, W1, [[W1]], CHVL, CBTL, CBVL),
    append(CBTL, [[W1]], BTL),
    append(CBVL, CHVL, BVL), !.

% Simple WHEN conditional clause B
scc([[]], [[]], BTL, BVL) :-
    advmod(W1, P1, when, _), advcl(_, _, W1, P1),
    sdc(1, W1, [[W1]], CHVL, CBTL, CBVL),
    append(CBTL, [[W1]], BTL),
    append(CBVL, CHVL, BVL), !.

% Simple IF cond. clause w. implied subject
scc([[]], [[]], [[W1]], [[]]) :-
    mark(W1, P1, 'if', _), advcl(_, _, W1, P1), !.

% Simple WHEN cond. clause w. implied subject
scc([[]], [[]], [[W1]], [[]]) :-
    advmod(W1, P1, when, _), advcl(_, _, W1, P1).

ruleterms(HTL, HVL, BTL, BVL) :-
    scc(HTL, HVL, BTL, BVL).
ruleterms(HTL, HVL, BTL, BVL) :-
    sds(HTL, HVL, BTL, BVL), !.
ruleterms(HTL, HVL, BTL, BVL) :-
    sis(HTL, HVL, BTL, BVL).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[Akbik and Broß</source>
          , 2009]
          <article-title>Alan Akbik and Jürgen Broß. Wanderlust: Extracting Semantic Relations from Natural Language Text using Dependency Grammar Patterns</article-title>
          .
          <source>In Proceedings of the WWW Workshop on Semantic Search</source>
          , volume
          <volume>48</volume>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <source>[Bernard and Arnold</source>
          , 2019]
          <string-name>
            <given-names>Denys</given-names>
            <surname>Bernard</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alexandre</given-names>
            <surname>Arnold</surname>
          </string-name>
          .
          <article-title>Cognitive Interaction with Virtual Assistants: From Philosophical Foundations to Illustrative Examples in Aeronautics</article-title>
          . Computers in Industry,
          <volume>107</volume>
          :
          <fpage>33</fpage>
          -
          <lpage>49</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>[Bos</source>
          , 2015]
          <string-name>
            <given-names>Johan</given-names>
            <surname>Bos</surname>
          </string-name>
          .
          <article-title>Open-Domain Semantic Parsing with Boxer</article-title>
          .
          <source>In Proceedings of the 20th Nordic Conference of Computational Linguistics</source>
          , pages
          <fpage>301</fpage>
          -
          <lpage>304</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Delannoy et al.,
          <year>1993</year>
          ] Jean François Delannoy, C. Feng, Stan Matwin, and
          <string-name>
            <given-names>Stan</given-names>
            <surname>Szpakowicz</surname>
          </string-name>
          .
          <article-title>Knowledge Extraction from Text: Machine Learning for Text-to-Rule Translation</article-title>
          .
          <source>In Proceedings of the ECML Workshop on Machine Learning and Text Analysis</source>
          , pages
          <fpage>7</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Dragoni et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Mauro</given-names>
            <surname>Dragoni</surname>
          </string-name>
          , Serena Villata,
          <string-name>
            <given-names>Williams</given-names>
            <surname>Rizzi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Guido</given-names>
            <surname>Governatori</surname>
          </string-name>
          .
          <article-title>Combining NLP Approaches for Rule Extraction from Legal Documents</article-title>
          .
          <source>In Proceedings of the 1st Workshop on Mining and Reasoning with Legal Texts</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Gerber and Ngomo</source>
          , 2012]
          <article-title>Daniel Gerber and AxelCyrille Ngonga Ngomo</article-title>
          .
          <article-title>Extracting Multilingual Natural-Language Patterns for RDF Predicates</article-title>
          .
          <source>In Proceedings of the 18th International Conference on Knowledge Engineering and Knowledge Management</source>
          , pages
          <fpage>87</fpage>
          -
          <lpage>96</lpage>
          . Springer,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Grishman</source>
          , 1997]
          <string-name>
            <given-names>Ralph</given-names>
            <surname>Grishman</surname>
          </string-name>
          . Information Extraction:
          <article-title>Techniques and Challenges</article-title>
          .
          <source>In Proceedings of the International Summer School on Information Extraction</source>
          , pages
          <fpage>10</fpage>
          -
          <lpage>27</lpage>
          . Springer,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Huddleston and Pullum</source>
          , 2002]
          <string-name>
            <given-names>Rodney</given-names>
            <surname>Huddleston and Geoffrey</surname>
          </string-name>
          <string-name>
            <given-names>K.</given-names>
            <surname>Pullum</surname>
          </string-name>
          .
          <source>The Cambridge Grammar of the English Language. Language</source>
          . Cambridge: Cambridge University Press,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>[Kain and Tompits</source>
          , 2019]
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Kain</surname>
          </string-name>
          and
          <string-name>
            <given-names>Hans</given-names>
            <surname>Tompits</surname>
          </string-name>
          .
          <article-title>Uhura: An Authoring Tool for Specifying Answer-Set Programs Using Controlled Natural Language</article-title>
          .
          <source>In Proceedings of the 16th European Conference on Logics in Artificial Intelligence</source>
          , pages
          <fpage>559</fpage>
          -
          <lpage>575</lpage>
          . Springer,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[Kamath and Das</source>
          , 2019]
          <string-name>
            <given-names>Aishwarya</given-names>
            <surname>Kamath and Rajarshi Das</surname>
          </string-name>
          .
          <article-title>A Survey on Semantic Parsing</article-title>
          .
          <source>In Proceedings of the 1st Conference on Automated Knowledge Base Construction</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Korman et al.,
          <year>2018</year>
          ] Daniel
          <string-name>
            <given-names>Z.</given-names>
            <surname>Korman</surname>
          </string-name>
          , Eric Mack, Jacob Jett, and
          <string-name>
            <surname>Allen</surname>
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Renear</surname>
          </string-name>
          .
          <article-title>Defining Textual Entailment</article-title>
          .
          <source>Journal of the Association for Information Science and Technology</source>
          ,
          <volume>69</volume>
          (
          <issue>6</issue>
          ):
          <fpage>763</fpage>
          -
          <lpage>772</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <source>[Kuhn</source>
          , 2014]
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Kuhn</surname>
          </string-name>
          .
          <article-title>A Survey and Classification of Controlled Natural Languages</article-title>
          .
          <source>Computational Linguistics</source>
          ,
          <volume>40</volume>
          (
          <issue>1</issue>
          ):
          <fpage>121</fpage>
          -
          <lpage>170</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Levesque et al.,
          <year>2012</year>
          ]
          <string-name>
            <given-names>Hector</given-names>
            <surname>Levesque</surname>
          </string-name>
          , Ernest Davis, and
          <string-name>
            <given-names>Leora</given-names>
            <surname>Morgenstern</surname>
          </string-name>
          .
          <article-title>The Winograd Schema Challenge</article-title>
          .
          <source>In Proceedings of the 13th International Conference on the Principles of Knowledge Representation and Reasoning</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>[Leviathan and Matias</source>
          , 2018]
          <string-name>
            <given-names>Yaniv</given-names>
            <surname>Leviathan</surname>
          </string-name>
          and
          <string-name>
            <given-names>Yossi</given-names>
            <surname>Matias</surname>
          </string-name>
          .
          <article-title>Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone</article-title>
          .
          <source>Technical report, Google AI Blog</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [MacLellan et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Christopher J. MacLellan</given-names>
            , Erik Harpstead, Robert P.
            <surname>Marinier</surname>
          </string-name>
          <string-name>
            <given-names>III</given-names>
            , and
            <surname>Kenneth</surname>
          </string-name>
          <string-name>
            <given-names>R.</given-names>
            <surname>Koedinger</surname>
          </string-name>
          .
          <article-title>A Framework for Natural Cognitive System Training Interactions</article-title>
          .
          <source>Advances in Cognitive Systems</source>
          ,
          <volume>6</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Manning et al.,
          <year>2014</year>
          ]
          <string-name>
            <given-names>Christopher D.</given-names>
            <surname>Manning</surname>
          </string-name>
          , Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and
          <string-name>
            <surname>David McClosky. The Stanford CoreNLP Natural Language Processing</surname>
          </string-name>
          <article-title>Toolkit</article-title>
          .
          <source>In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations</source>
          , pages
          <fpage>55</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <source>[McCarthy</source>
          ,
          <year>1959</year>
          ]
          <article-title>John McCarthy. Programs with Common Sense</article-title>
          .
          <source>In Proceedings of the Teddington Conference on the Mechanization of Thought Processes</source>
          , pages
          <fpage>75</fpage>
          -
          <lpage>91</lpage>
          ,
          <year>1959</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <source>[McCarthy</source>
          ,
          <year>1989</year>
          ]
          <string-name>
            <given-names>John</given-names>
            <surname>McCarthy. Artificial Intelligence</surname>
          </string-name>
          ,
          <article-title>Logic and Formalizing Common Sense</article-title>
          .
          <source>In Philosophical Logic and Artificial Intelligence</source>
          , pages
          <fpage>161</fpage>
          -
          <lpage>190</lpage>
          . Springer,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <source>[McCarthy</source>
          ,
          <year>1998</year>
          ]
          <string-name>
            <given-names>John McCarthy. Elaboration</given-names>
            <surname>Tolerance</surname>
          </string-name>
          .
          <source>In Proceedings of the 4th International Symposium on Logical Formalizations of Commonsense Reasoning</source>
          , pages
          <fpage>198</fpage>
          -
          <lpage>216</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>[Michael and Valiant</source>
          , 2008]
          <string-name>
            <given-names>Loizos</given-names>
            <surname>Michael</surname>
          </string-name>
          and
          <string-name>
            <given-names>Leslie G.</given-names>
            <surname>Valiant. A First Experimental</surname>
          </string-name>
          <article-title>Demonstration of Massive Knowledge Infusion</article-title>
          .
          <source>In Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning</source>
          , pages
          <fpage>378</fpage>
          -
          <lpage>388</lpage>
          . AAAI Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <source>[Michael</source>
          , 2016]
          <string-name>
            <given-names>Loizos</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Cognitive Reasoning and Learning Mechanisms</article-title>
          .
          <source>In Proceedings of the 4th International Workshop on Artificial Intelligence and Cognition</source>
          , volume
          <volume>1895</volume>
          , pages
          <fpage>2</fpage>
          -
          <lpage>23</lpage>
          . CEUR-WS.org,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <source>[Michael</source>
          , 2017]
          <string-name>
            <given-names>Loizos</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <source>The Advice Taker 2.0. In Proceedings of the 13th International Symposium on Commonsense Reasoning</source>
          , volume
          <year>2052</year>
          .
          <article-title>CEUR-WS</article-title>
          .org,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <source>[Michael</source>
          , 2019]
          <string-name>
            <given-names>Loizos</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Machine Coaching</article-title>
          .
          <source>In Proceedings of the IJCAI Workshop on Explainable Artificial Intelligence</source>
          , pages
          <fpage>80</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [Sap et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Maarten</given-names>
            <surname>Sap</surname>
          </string-name>
          , Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie,
          <string-name>
            <given-names>Hannah</given-names>
            <surname>Rashkin</surname>
          </string-name>
          , Brendan Roof,
          <article-title>Noah A Smith, and Yejin Choi. Atomic: An Atlas of Machine Commonsense for If-Then Reasoning</article-title>
          .
          <source>In Proceedings of the 33rd AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>33</volume>
          , pages
          <fpage>3027</fpage>
          -
          <lpage>3035</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <source>[Valiant</source>
          , 2006]
          <string-name>
            <given-names>Leslie G. Valiant. Knowledge</given-names>
            <surname>Infusion</surname>
          </string-name>
          .
          <source>In Proceedings of the 21st AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>6</volume>
          , pages
          <fpage>1546</fpage>
          -
          <lpage>1551</lpage>
          . AAAI Press,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <source>Type: 3.2.3. Declarative Conditional Sentences (SDC 1.1.1 if/when SDC 1.2</source>
          .2).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <article-title>sdc(OFFSET, W1</article-title>
          , [[W1]],
          <string-name>
            <surname>HVL</surname>
          </string-name>
          , [[W2]], BVL) :
          <article-title>- cop(W1, P1</article-title>
          , be, _),
          <source>nsubj(W1</source>
          ,
          <fpage>P1</fpage>
          ,
          <fpage>W2</fpage>
          , _),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , HVL),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <article-title>% Simple TO BE Clauses with verb Predicate sdc(_, W1</article-title>
          , [[W1]], [[W2]], [[]], [[]]) :
          <article-title>- aux(W1, P1</article-title>
          , be, _),
          <article-title>nsubj(W1, P1, W2, P2), ner(_, W2</article-title>
          , P2), !.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <article-title>sdc(OFFSET, W1</article-title>
          , [[W1]],
          <string-name>
            <surname>HVL</surname>
          </string-name>
          , [[W2]], BVL) :
          <article-title>- aux(W1, P1</article-title>
          , be, _),
          <source>nsubj(W1</source>
          ,
          <fpage>P1</fpage>
          ,
          <fpage>W2</fpage>
          , _),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , HVL),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL),
          <year>!</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <article-title>% Simple Transitive Verb Clauses sdvc( _, W1</article-title>
          , [[W1]], [[W2, W3]], [[]], [[]]) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb' ),
          <article-title>nsubj(W1, P1, W2, P2), ner(_, W2, P2), dobj(W1, P1, W3, P3), ner(_, W3</article-title>
          , P3), !.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <article-title>sdvc( OFFSET, W1</article-title>
          , [[W1]], [[W2|HVL]], [[W3]], BVL) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb'),
          <article-title>nsubj(W1, P1, W2, P2), ner(_, W2, P2), dobj(W1, P1</article-title>
          ,
          <fpage>W3</fpage>
          , _),
          <source>addvar(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , HVL),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL),
          <year>!</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <article-title>sdvc( OFFSET, W1</article-title>
          , [[W1]], [HVL], [[W2]], BVL) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb'),
          <article-title>nsubj(W1, P1, W2, _), dobj(W1, P1, W3, P3), ner(_, W3, P3), addvar(1, OFFSET</article-title>
          , PHVL),
          <source>append(PHVL</source>
          , [W3], HVL),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL),
          <year>!</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <article-title>sdvc( OFFSET, W1</article-title>
          , [[W1]], [HVL], [[W2], [W3]], BVL) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb' ),
          <source>nsubj(W1</source>
          ,
          <fpage>P1</fpage>
          ,
          <fpage>W2</fpage>
          , _),
          <source>dobj(W1</source>
          ,
          <fpage>P1</fpage>
          ,
          <fpage>W3</fpage>
          , _),
          <source>addvar(2</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , HVL),
          <source>addvarlist(2</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL),
          <year>!</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <article-title>% Simple Verb Clauses sdvc(_, W1</article-title>
          , [[W1]], [[W2]], [[]], [[]]) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb' ),
          <article-title>nsubj(W1, P1, W2, P2), ner(_, W2</article-title>
          , P2), !.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <article-title>sdvc(OFFSET, W1</article-title>
          , [[W1]],
          <string-name>
            <surname>HVL</surname>
          </string-name>
          , [[W2]], BVL) :
          <article-title>- pos(TAG, W1, P1), sub_string(TAG, 0</article-title>
          , _, _, 'vb' ),
          <source>nsubj(W1</source>
          ,
          <fpage>P1</fpage>
          ,
          <fpage>W2</fpage>
          , _),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , HVL),
          <source>addvarlist(1</source>
          ,
          <string-name>
            <surname>OFFSET</surname>
          </string-name>
          , BVL).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <article-title>% Simple Imperative Clauses with subject sic([</article-title>
          [W1]], [['X1']], [[W2]], [['X1']]) :
          <article-title>- pos(TAG, W1, P1), % Simple Imperative Sentences sis(HTL, HVL</article-title>
          , BTL, BVL) :
          <article-title>- root(root, 0</article-title>
          ,
          <issue>W1</issue>
          , _), sic([[W1]],
          <string-name>
            <surname>HVL</surname>
          </string-name>
          , BTL, BVL), append([], [[W1]], HTL).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <article-title>% Simple Declarative Sentences sds(HTL, HVL</article-title>
          , BTL, BVL) :
          <article-title>- root(root, 0</article-title>
          ,
          <issue>W1</issue>
          , _),
          <source>sdc(0</source>
          ,
          <issue>W1</issue>
          ,
          <string-name>
            <surname>HTL</surname>
          </string-name>
          , HVL, BTL, BVL),
          <year>!</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>