<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop “From Objects to Agents”, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Reactive Cognitive Architecture based on Natural Language Processing for the task of Decision-Making using a Rich Semantic</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Carmelo Fabio Longo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Longo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Corrado Santoro</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Engineering, University of Messina, Contrada di Dio</institution>
          ,
          <addr-line>S. Agata, 98166 Messina</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Computer Science, University of Catania</institution>
          ,
          <addr-line>Viale Andrea Doria, 6, 95125 Catania</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>1</volume>
      <fpage>4</fpage>
      <lpage>16</lpage>
      <abstract>
        <p>The field of cognitive architectures is rich in approaches featuring a wide range of typical abilities of the human mind, such as perception, action selection, learning, reasoning, and meta-reasoning. However, those leveraging Natural Language Processing are quite limited in both domain and reasoning capabilities. In this work, we present a cognitive architecture called CASPAR, based on a Belief-Desire-Intention framework, capable of reactive reasoning using highly descriptive semantics made of First Order Logic predicates parsed from natural language utterances.</p>
      </abstract>
      <kwd-group>
        <kwd>Cognitive Architecture</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>First Order Logic</kwd>
        <kwd>Internet of Things</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the last decade, a large number of devices connected together and controlled by AI has entered
millions of houses: the pervasive market of the Internet of Things (IoT). This phenomenon also
extends to domains other than the domestic one, such as smart cities, remote e-healthcare,
industrial automation, and so on. In most of them, especially the usual domestic ones, vocal
assistants play an important role, because voice is the most natural way to give the user the
feeling of dealing with an intelligent sentient being who cares about the proper functioning of
the home environment. But how intelligent are these vocal assistants, actually? Although there
can be multiple definitions of intelligence, in this work we are interested only in those related to
autonomous agents acting in the scope of decision-making.</p>
      <p>Nowadays, companies producing vocal assistants aim more at increasing their pervasiveness
than at improving their native reasoning capabilities; by reasoning capabilities, we mean
not only the ability to infer the proper association command → plan from utterances, but also
the capability of combining facts with rules in order to infer new knowledge and help the user
in decision-making tasks.</p>
      <p>
        Apart from the well-known cloud-based vocal assistants [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], other kinds of solutions [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]
are based on neural models exclusively trained on the domotic domain, or they exploit chat
engines [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] whose understanding skills strictly depend on syntax. This makes the
range of their capabilities quite limited.
      </p>
      <p>
        In light of the above, our aim in this paper is the design of a cognitive architecture, called
CASPAR, based on Natural Language Processing (NLP), that makes possible the implementation
of intelligent agents able to outclass the available ones in performing deductive activities. Such
agents could be used both for domotic purposes and for any other kind of application involving
common deductive processes based on natural language. As a further motivation, we have to
highlight that, as claimed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], cognitive architectures have so far been used mainly as research
tools, and very few of them have been developed outside of academia; moreover, none of them
has been specifically designed for IoT. Of course, most of them have features and resources
which could be exploited in such a domain, but their starting motivations were different from
ours.
      </p>
      <p>Although cognitive architectures should be distinguished from the models that implement them,
our architecture can be used as a domotic agent as is, after the definition of both the involved
entities and the I/O interfaces.</p>
      <p>This paper is structured as follows: Section 2 describes the state of the art of the related literature;
Section 3 shows in detail all the architecture’s components and underlying modules; Section 4
shows the architecture’s reasoning heuristics in the presence of clauses made of composite
predicates, taking into account possible argument substitutions as well; Section 5 summarizes the
content of the paper and provides our conclusions, together with future work perspectives.
A Python implementation of CASPAR is also provided for research purposes in a Github
repository1.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        The number of existing cognitive architectures has reached several hundred according to
the authors of [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Among the most popular ones, which also influenced several subsequent
works, there are SOAR, CLARION and LIDA, mentioned in a theoretical comparison in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Most of them were inspired either by neuroscience or by psychoanalysis/philosophy studies; the
former are certainly less speculative, being supported by scientific data regarding the functions
of brain modules in specific conditions and their interactions. The Integrated Information Theory [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
even provides a metric, Phi, to evaluate the consciousness level of a cognitive system, which
would be proportional to those overall interactions. In this section, we will focus mostly on
architectures implementing Reasoning/Action Selection, Natural Language Processing
and Decision-Making, these being the main basis on which CASPAR has been built.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] the authors describe three different spoken dialog systems, one of them based on the
FORR architecture and designed to fulfill the task of ordering books from the public library
by phone. All three dialog systems are based on a local Speech-to-Text engine called
PocketSphinx, which notoriously performs worse than cloud-based systems [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. This leads
to a greater struggle to reduce the bias between the user’s request and the result.
      </p>
      <p>
        The authors of [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] present a computational model called MoralDM, which integrates
multiple AI techniques to model human moral decision-making by leveraging a two-layer
      </p>
      <p>[Figure 1. Overview of the CASPAR architecture: the Translation Service (ASR, Dependency
Parser, Uniquezer, MST Builder, FOL Builder), the Reactive Reasoner (STT Front-End, Direct
Commands Parser, Routines Parser, Definite Clauses Builder, Beliefs KB, Sensor Instances,
Physical Sensors), the Smart Environment Interface (Devices, Devices Groups, Smart Home),
and the Cognitive Reasoner (Clauses KB, FOL Reasoner), all running on the PHIDIAS engine.]</p>
      <p>inference engine which takes into account prior case decisions and a knowledge base with a
formal representation of moral quality-weighted facts. Such facts are extracted from natural
language by using a semi-automatic translator from simplified-English scenarios (which is the
major weakness of such an approach) into predicate calculus.</p>
      <p>
        The DIARC architecture [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] has been designed to address the issue of recognizing morally
and socially charged situations in human-robot collaborations. Although it exploits several
well-known NLP resources (such as Sphinx, VerbNet, and FrameNet), it has been tested only on
trivial examples meant to trigger robot reactions, using an ad-hoc symbolic representation of
both known and perceived facts.
      </p>
      <p>In general, probing the existing cognitive architectures leveraging NLP, we have found that
most of them are limited both in domain of application and in terms of semantic complexity.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The Architecture</title>
      <p>The name chosen for the architecture presented in this paper is CASPAR. It
derives from the words Cognitive Architecture System Planned And Reactive, which
summarize its two main features. In Figure 1, all interacting components are depicted, filled
with distinct colours.</p>
      <p>The main component of this architecture, namely the Reactive Reasoner, acts as "core router"
by delegating operations to other components, and providing all needed functions to make the
whole system fully operative.</p>
      <p>This architecture’s Knowledge Base (KB) is divided into two distinct parts operating separately,
which we will distinguish as Beliefs KB and Clauses KB: the former contains information about the
physical entities which affect the agent and which we want the agent to affect; the latter contains
conceptual information not perceived by the agent’s sensors, but on which we want the agent to
make logical inferences.</p>
      <p>The Beliefs KB provides exhaustive cognition about what the agent could expect as input
data coming from the outside world; as the name suggests, this cognition is managed by means
of proper beliefs that can - in turn - activate proper plans in the agent’s behaviour.</p>
      <p>The Clauses KB is defined by means of assertions/retractions of nested First Order Logic
(FOL) definite clauses, possibly made of composite predicates, and it can be
queried, providing an answer (True or False) to any query.</p>
      <p>
        The two KBs represent, in a way, two different kinds of human memory: the so-called
procedural memory or implicit memory [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], made of thoughts directly linked to concrete
and physical entities; and the conceptual memory, based on cognitive processes of comparative
evaluation.
      </p>
      <p>As in human beings, in this architecture the two KBs can interact with each other in a
very reactive decision-making process.</p>
      <sec id="sec-3-1">
        <title>3.1. The Translation Service</title>
        <p>
          This component (left box in Figure 1) is a pipeline of five modules with the task of taking a sound
stream in natural language and translating it into a neo-Davidsonian FOL expression, inheriting
its shape from the event-based formal representation of Davidson [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], where for instance the
sentence:
        </p>
        <sec id="sec-3-1-1">
          <title>Brutus stabbed suddenly Caesar in the agora</title>
          <p>(1)
is represented by the following notation:</p>
          <p>∃e stabbed(e, Brutus, Caesar) ∧ suddenly(e) ∧ in(e, agora)
The variable e, which we call the davidsonian variable, identifies the verbal action related to
stabbed. In the case a sentence contains more than one verbal phrase, we make use of
indexes to distinguish ei from ej, with i ≠ j.</p>
          <p>
            As for the notation used in this work, it does not use ground terms as arguments of the predicates,
in order to permit the sharing of different features related to the same term, as follows,
when we include the adjective evil:
∃e stabbed(e, Brutus(x), Caesar(y)) ∧ evil(x) ∧ suddenly(e) ∧ in(e,
agora(z))
which can also be represented, ungrounding the verbal action arguments, as follows:
∃e stabbed(e, x, y) ∧ Brutus(x) ∧ Caesar(y) ∧ evil(x) ∧ suddenly(e) ∧
in(e, z) ∧ agora(z)
Furthermore, in the notation used for this work each predicate label is in the form L:POS(t),
where L is a lemmatized word and POS is a Part-of-Speech (POS) tag from the Penn Treebank
tagset[
            <xref ref-type="bibr" rid="ref16">16</xref>
            ].
          </p>
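          <p>For instance, the ungrounded form above, with labels in the lemma:POS shape, can be rendered as plain Python tuples (an illustrative encoding; CASPAR’s internal format may differ):</p>
          <p>
```python
# Neo-Davidsonian FOL for "Brutus stabbed suddenly Caesar in the agora".
# Each predicate is a tuple (label, args...); the expression is their conjunction.
predicates = [
    ("stab:VBD", "e1", "x1", "x2"),   # verbal action with davidsonian variable e1
    ("Brutus:NNP", "x1"),
    ("Caesar:NNP", "x2"),
    ("suddenly:RB", "e1"),            # adverb attached to the event variable
    ("in:IN", "e1", "x3"),            # verbal preposition
    ("agora:NN", "x3"),
]

def render(preds):
    """Pretty-print the conjunction of predicates."""
    return " ∧ ".join(f"{p[0]}({', '.join(p[1:])})" for p in preds)

print(render(predicates))
```
          </p>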
          <p>
            The first module in the pipeline, i.e., the Automatic Speech Recognition [
            <xref ref-type="bibr" rid="ref17 ref18 ref19">17, 18, 19</xref>
            ] (ASR),
allows a machine to understand the user’s speech and convert it into a series of words.
          </p>
          <p>
            The second module is the Dependency Parser, which aims at extracting the semantic
relationships, namely dependencies, between all the words in an utterance. In [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ], the authors present a
comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of
English.
          </p>
          <p>The third module, the Uniquezer, aims at renaming all the entities within each dependency in
order to make them unique. Such a task is mandatory to ensure the correctness of the outcomes
of the next module in the pipeline (the Macro Semantic Table), whose data structures need a
distinct reference to each entity coming from the dependency parser.</p>
          <p>The fourth module, defined as MST Builder, has the purpose to build a novel semantic structure
defined as Macro Semantic Table (MST), which summarizes in a canonical shape all the semantic
features in a sentence, starting from its dependencies, in order to derive FOL expressions.</p>
          <p>Here is a general schema of a MST, referred to the utterance u:</p>
          <p>MST(u) = {ACTIONS, VARLIST, PREPS, BINDS, COMPS, CONDS}
where</p>
          <p>ACTIONS = [(label, ei, xj, xk),...]
VARLIST = [(x1, label1),...,(xn, labeln)]</p>
          <p>PREPS = [(label, (ei | xj), xk),...]</p>
          <p>BINDS = [(labeli, labelj),...]
COMPS = [(labeli, labelj),...]</p>
          <p>
            CONDS = [e1, e2,...]
All tuples inside such lists are populated with variables and labels whose indexing is considered
disjoint among distinct lists, although there are significant relations which will be clarified
later. The MST building takes into account also the analysis done in [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ] about the so-called
slot allocation, which indicates specific policies about entity’s location inside each predicate,
depending on verbal cases. This is because the human mind, in the presence of whatever
utterance, is able to populate implicitly any semantic role (identified by subject/object slots)
taking part in a verbal action, in order to create and interact with a logical model of the utterance.
In this work, by leveraging a step-by-step dependencies analysis, we want to create artificially
such a model, to give an agent the chance to make logical inference on the available knowledge.
All the dependencies used in this paper are part of the ClearNLP[
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] tagset, which is made of
46 distinct entries. For instance, considering the dependencies of (1):
nsubj(stabbed, Brutus)
          </p>
          <p>ROOT(stabbed, stabbed)
advmod(stabbed, suddenly)
dobj(stabbed, Caesar)
prep(stabbed, in)
det(agora, the)
pobj(in, agora)
from the pair nsubj/dobj it is possible to create a new tuple inside ACTIONS as follows,
also taking into account the variable index counting:
(stabbed, e1, x1, x2)
and inside VARLIST as well:
(x1, Brutus)
(x2, Caesar)
Similarly, after an analysis of the pair prep/pobj, it is possible to create a further tuple inside
PREPS as follows:
(in, e1, x3)
and inside VARLIST:
(x3, agora)
The dependency advmod carries information about the verb (stabbed) being modified by
means of the adverb suddenly. In light of this, a further tuple inside VARLIST will be created
as follows:
(e1, suddenly)</p>
          <p>As for the BINDS list, it contains tuples with a quality-modifier role: in the case (1) had
the brave Caesar as object, a further dependency amod would be created as follows:
amod(Caesar, brave)
In this case, a bind between Caesar and brave would be created inside BINDS as follows:
(Caesar, brave)</p>
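          <p>Collecting the tuples derived above, the whole MST for sentence (1) can be sketched as a plain Python dict (the field layout follows the schema above; the concrete data structures are illustrative):</p>
          <p>
```python
# MST for "Brutus stabbed suddenly Caesar in the agora" (sentence 1).
# The tuples mirror the ACTIONS/VARLIST/PREPS derivations shown above.
mst = {
    "ACTIONS": [("stabbed", "e1", "x1", "x2")],
    "VARLIST": [("x1", "Brutus"), ("x2", "Caesar"),
                ("x3", "agora"), ("e1", "suddenly")],
    "PREPS":   [("in", "e1", "x3")],
    "BINDS":   [],    # would hold e.g. ("Caesar", "brave")
    "COMPS":   [],    # multi-word nouns, e.g. ("Barack", "Hussein")
    "CONDS":   [],    # davidsonian variables of subordinate clauses
}

print(mst["ACTIONS"])  # [('stabbed', 'e1', 'x1', 'x2')]
```
          </p>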
          <p>As with BINDS, COMPS contains tuples of terms related to each other, but in this case they
are part of multi-word nouns like Barack Hussein Obama, whose nouns after the first are
classified as compound by the dependency parser.</p>
          <p>As for the CONDS list, it contains the davidsonian variables whose related predicates
subordinate the remaining ones. For instance, in the presence of utterances like:
if the sun shines strongly, Robert drinks wine
or
while the sun shines strongly, Helen smiles
in both cases, the dependency mark will give information about the subordinate conditions
related to the verb shines, which are mark(shines, if) and mark(shines, while). In those
cases, the davidsonian variable related to shines will populate the list CONDS. In the same way,
in the presence of the word when, a subordinate condition might be inferred as well; but since
adverbs are classified as advmod (as we have seen for suddenly before), it will be considered a
subordinate condition only when its POS is WRB and not RB: the former denotes a wh-adverb,
the latter a qualitative adverb.</p>
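          <p>The mark/WRB/RB distinction above can be sketched as a small check (a hypothetical helper, not CASPAR’s actual code):</p>
          <p>
```python
# Words that, as `mark` dependencies, always introduce a subordinate condition.
SUBORDINATING_MARKS = {"if", "while"}

def is_subordinate(dep, word, pos):
    """Decide whether a dependency introduces a subordinate condition.

    mark(if/while) always does; an advmod such as "when" does only when
    its POS is WRB (wh-adverb), not RB (qualitative adverb).
    """
    if dep == "mark" and word.lower() in SUBORDINATING_MARKS:
        return True
    if dep == "advmod" and pos == "WRB":
        return True
    return False

print(is_subordinate("mark", "if", "IN"))          # True
print(is_subordinate("advmod", "when", "WRB"))     # True
print(is_subordinate("advmod", "suddenly", "RB"))  # False
```
          </p>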
          <p>
            The fifth and last module, defined as FOL Builder, aims to build FOL expressions starting
from the MSTs. Since (virtually) all approaches to formal semantics assume the Principle of
Compositionality2, formally formulated by Partee [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ], every semantic representation can be
incrementally built up when constituents are put together during parsing. In light of the above,
it is possible to build FOL expressions straightforwardly starting from an MST, which summarizes
all semantic features extracted during a step-by-step dependencies analysis.
For the rest of the paper, the labels inside the MST tuples will be in the form of lemma:POS.
Then, for instance, instead of stabbed we’ll have stab:VBD, where stab is the lemmatization
of stabbed and VBD is the POS representing a past tense.
          </p>
          <p>For each tuple (var, lemma:POS) in VARLIST the following predicate will be created:
lemma:POS(var)
which represents a noun, such as tiger:NN(x1) or Robert:NNP(x1)3. var can also be a
davidsonian variable when POS has the value of RB. In such cases, the tuples represent adverbs,
such as Hardly:RB(e1) or Slowly:RB(e2).</p>
          <p>For each tuple (lemma:POS, dav, subj, obj) in ACTIONS, the following predicate will
be created:</p>
          <p>lemma:POS(dav, subj, obj)
representing a verbal action, such as be:VBZ(e1, x1, x2) or shine:VBZ(e2, x3, x4).
For each tuple (lemma:POS, dav/var, obj) in PREPS the following predicate will be
created:</p>
          <p>lemma:POS(dav/var, obj)
where dav/var is a variable either in a tuple of ACTIONS or of VARLIST, respectively, while
obj is a variable in a tuple of VARLIST. Such predicates represent verbal/noun prepositions.
For each tuple (lemma:POS1,lemma:POS2) in COMPS, whose first entity lemma:POS1 is in a
tuple of VARLIST, a predicate will be created as follows:</p>
          <p>lemma:POS2(var)
where var is the variable of the tuple in VARLIST having lemma:POS1 as its second entity. In the
case of multi-word nouns, each noun after the first one in VARLIST will be encoded
within COMPS.
2“The meaning of a whole is a function of the meanings of the parts and of the way they are syntactically
combined.”
3Without considering entity enumeration.</p>
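          <p>The tuple-to-predicate rules above can be sketched as a single function (illustrative only; the real FOL Builder operates on richer structures):</p>
          <p>
```python
def fol_predicates(mst):
    """Turn MST tuples into predicate strings, following the rules above."""
    preds = []
    for label, dav, subj, obj in mst["ACTIONS"]:
        preds.append(f"{label}({dav}, {subj}, {obj})")   # verbal actions
    for var, label in mst["VARLIST"]:
        preds.append(f"{label}({var})")                  # nouns, or adverbs when var is an e-variable
    for label, head, obj in mst["PREPS"]:
        preds.append(f"{label}({head}, {obj})")          # verbal/noun prepositions
    return preds

mst = {"ACTIONS": [("stab:VBD", "e1", "x1", "x2")],
       "VARLIST": [("x1", "Brutus:NNP"), ("x2", "Caesar:NNP")],
       "PREPS":   [("in:IN", "e1", "x3")]}
print(" ∧ ".join(fol_predicates(mst)))
```
          </p>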
          <p>As for CONDS, its usage will be explained next with an example. Let the sentence in exam be:</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>When the sun shines strongly, Robert is happy</title>
          <p>(2)
the related MST is:</p>
          <p>ACTIONS = [(shine01:VBZ, e1, x1, x2), (be01:VBZ, e2, x3, x4)]</p>
          <p>VARLIST = [(x1, sun01:NN), (x2, ?), (x3, Robert01:NNP), (x4,
happy01:JJ)]</p>
          <p>CONDS = [e1]
It has to be noticed how the entities within each list are numbered, as an effect of the Uniquezer
processing before the MST building. As a final outcome we’ll have an implication like the
following:
shine01:VBZ(e1, x1, _) ∧ sun01:NN(x1) =⇒ be01:VBZ(e2, x3, x4) ∧
Robert01:NNP(x3) ∧ happy01:JJ(x4)</p>
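          <p>The role of CONDS in producing the implication can be sketched as follows: predicates whose davidsonian variable is listed in CONDS, together with the predicates of their arguments, form the antecedent; the rest form the consequent. This is a one-hop simplification of the actual procedure:</p>
          <p>
```python
def build_implication(predicates, conds):
    """Split predicates into (antecedent, consequent) lists.

    predicates: tuples (label, args...); conds: set of subordinate e-variables.
    """
    def vars_of(p):
        return set(p[1:])
    # Collect variables reachable from the conditional events (one hop).
    cond_vars = set(conds)
    for p in predicates:
        if p[1] in conds:
            cond_vars |= vars_of(p)
    antecedent = [p for p in predicates if vars_of(p) & cond_vars]
    consequent = [p for p in predicates if not (vars_of(p) & cond_vars)]
    return antecedent, consequent

preds = [("shine01:VBZ", "e1", "x1", "_"), ("sun01:NN", "x1"),
         ("be01:VBZ", "e2", "x3", "x4"), ("Robert01:NNP", "x3"),
         ("happy01:JJ", "x4")]
ante, cons = build_implication(preds, {"e1"})
print(ante)  # the shine/sun predicates
print(cons)  # the be/Robert/happy predicates
```
          </p>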
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. The Reactive Reasoner</title>
        <p>As already mentioned, this component (central box in Figure 1) has the task of letting the other
modules communicate with each other; it also includes additional modules such as the
Speech-To-Text (STT) Front-End, the IoT Parsers (Direct Command Parser and Routine Parser), the Sensor
Instances, and the Definite Clauses Builder. The Reactive Reasoner also contains the Beliefs KB,
which supports both Reactive and Cognitive reasoning.</p>
        <p>
          The core of this component processing is managed by the Belief-Desire-Intention Framework
Phidias[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], which gives Python programs the ability to perform logic-based reasoning (in
Prolog style) and lets developers write reactive procedures, i.e., pieces of program that can
promptly respond to environment events.
        </p>
        <p>The agent’s first interaction with the outer world happens through the STT Front-End, which
is made of production rules reacting on the basis of specific words asserted by a Sensor
Instance; the latter, being an instance of the superclass Sensor provided by Phidias, will assert a
belief called STT(X), with X as the recognized utterance, after the sound stream is acquired by
the microphone and translated into text by means of the ASR.</p>
        <p>The Direct Command Parser has the task of combining FOL expression predicates with
common variables coming from the Translation Service, via a production rule system. The
final outcome of such rules is a belief called INTENT, which might trigger another rule in the
Smart Environment Interface. A similar behaviour is reserved for the Routine Parser, when
subordinating conditions within an IoT command are detected; it produces two types of beliefs:
ROUTINE and COND, linked together by a unique code. The belief ROUTINE is a sort of pending
INTENT, which cannot match any production rule and execute its plan until the content of its
related COND meets that of another belief asserted by a Sensor Instance and called SENSOR.
Then, the ROUTINE belief will be turned into an INTENT and get ready for execution as a direct
command, as shown in lines 2, 3, 5, 7, 8 of Listing 1 in the Appendix.</p>
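        <p>The promotion of a pending ROUTINE into an INTENT when a SENSOR reading matches its COND can be sketched as follows (an illustration of the mechanism, not the Phidias production rules themselves):</p>
        <p>
```python
# Beliefs as (kind, code, payload) tuples; ROUTINE and COND share a unique code.
beliefs = [
    ("ROUTINE", "r1", "turn_on(light)"),   # pending intent, keyed by code r1
    ("COND", "r1", "temperature<19"),      # its subordinate condition
]

def on_sensor(reading, beliefs):
    """When a SENSOR reading matches a COND, turn the linked ROUTINE into an INTENT."""
    matched = {code for kind, code, payload in beliefs
               if kind == "COND" and payload == reading}
    out = []
    for kind, code, payload in beliefs:
        if kind == "ROUTINE" and code in matched:
            out.append(("INTENT", code, payload))   # ready for execution
        elif not (kind == "COND" and code in matched):
            out.append((kind, code, payload))       # consumed CONDs are retracted
    return out

print(on_sensor("temperature<19", beliefs))
# [('INTENT', 'r1', 'turn_on(light)')]
```
        </p>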
        <p>The Definite Clauses Builder is responsible for combining FOL expression predicates with
common variables, through a production rule system, in order to produce nested definite
clauses. Considering (2) and its related FOL expression produced by the Translation Service,
the production rule system of the Definite Clauses Builder, taking into account the POS of each
predicate, will produce the following nested definite clause:
shine01:VBZ(sun01:NN(x1), _) =⇒ be01:VBZ(Robert01:NNP(x3),
happy01:JJ(x4))
The rationale behind such a notation choice is explained next: a definite clause is either atomic
or an implication whose antecedent is a conjunction of positive literals and whose consequent
is a single positive literal. Because of such restrictions, in order to make MST-derived clauses
suitable for doing inference with the Backward-Chaining algorithm (which works only with a
KB made of definite clauses), we must be able to encapsulate all their information properly.
The strategy followed is to create composite terms, taking into account the POS tags and
applying the following hierarchy to every noun expression, as follows:</p>
        <p>IN(JJ(NN(NNP(x))), t)
(3)
where IN is a preposition label, JJ an adjective label, NN and NNP are noun and proper noun
labels, x is a bound variable and t a predicate.</p>
        <p>As for the verbal actions, the nesting hierarchy will be the following:</p>
        <p>ADV(IN(VB(t1, t2), t3))
where ADV is an adverb label, IN a preposition label, VB a verb label, and t1, t2, t3 are predicates;
in the case of intransitive or imperative verbs, the argument t2 or t1 of VB, respectively, will be
left void. As we can see, a preposition might be related either to a noun or to a verb.</p>
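        <p>A sketch of the noun-side nesting (3), assuming simple POS-keyed parts (a hypothetical helper that simply skips missing levels):</p>
        <p>
```python
def nest_noun(var, nnp=None, nn=None, jj=None, prep=None, prep_obj=None):
    """Build a composite noun term following the hierarchy IN(JJ(NN(NNP(x))), t)."""
    term = var
    for label in (nnp, nn, jj):      # innermost to outermost
        if label:
            term = f"{label}({term})"
    if prep:                         # optional noun preposition wraps the whole term
        term = f"{prep}({term}, {prep_obj})"
    return term

print(nest_noun("x", nnp="Caesar:NNP", jj="brave:JJ"))
# brave:JJ(Caesar:NNP(x))
```
        </p>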
      </sec>
      <sec id="sec-3-3">
        <title>3.3. The Smart Environment Interface</title>
        <p>
          This component (upper right box in Figure 1) provides a bidirectional interaction between the
architecture and the outer world. In Listing 1 in the Appendix, a simple example is shown, where
a production rule system is used as a reactive tool to trigger the proper plans in the presence of
specific asserted beliefs. In [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] we have shown the effectiveness of this approach by leveraging
the Phidias predecessor Profeta [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], even with a shallower analysis of the semantic dependencies,
as well as an operation encoding via WordNet [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] in order to make the operating agent
multi-language and multi-synonymous.
        </p>
        <p>Such an interface includes a production rule system containing different types of entity
definitions and operation codes involving the entities themselves, which trigger specific procedures
containing high-level language (e.g., lines 11 and 12 in Listing 1 in the Appendix). The latter
should contain all the functions required to drive each device in order to get the desired behaviour,
whose implementation in this work is left to the developer. Each production rule also contains
subordinating conditions defined as Active Beliefs: lemma_in_syn(X, S) checks the
membership of the lemma X in the synset S, to make the rule multi-language and
multi-synonymous (after having defined the entities depending on the language); while the Active
Belief eval_cls(Y) lets the Beliefs KB and the Clauses KB interact with each other in a very reactive
decision-making process, where the agent decides whether or not to execute the related plan within the
square brackets, according to the result of the reasoning on the query Y; the latter, in lines 12-13 of
Listing 1 in the Appendix, is the representation of the sentence an inhabitant is at home.</p>
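        <p>An active belief like lemma_in_syn can be sketched with a hand-made synset table standing in for WordNet (the table below is illustrative data, not actual WordNet content):</p>
        <p>
```python
# Stand-in synsets: in CASPAR this membership test is backed by WordNet,
# which is what makes the rules multi-language and multi-synonymous.
SYNSETS = {
    "switch_on.v.01": {"switch on", "turn on", "accendere"},  # hypothetical entries
}

def lemma_in_syn(lemma, synset_id):
    """Active-belief-style check: is `lemma` a member of the synset?"""
    return lemma.lower() in SYNSETS.get(synset_id, set())

print(lemma_in_syn("turn on", "switch_on.v.01"))   # True
print(lemma_in_syn("turn off", "switch_on.v.01"))  # False
```
        </p>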
        <p>Finally, this module also contains production rules to change routines into direct commands
according to the presence of specific beliefs related to conditionals, which might or might not be
asserted by some Sensor Instance (see lines 2, 3, 5, 8, 9 of Listing 1 in the Appendix).</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. The Cognitive Reasoner</title>
        <p>This component (bottom right box in Figure 1) allows an agent to assert/query the Clauses KB
with nested definite clauses, where each predicate argument can be another predicate and so on,
built by the Definite Clauses Builder module (within the Reactive Reasoner).</p>
        <p>
          Beyond the nominal FOL reasoning with the well-known Backward-Chaining algorithm, this
module also exploits another class of logical axioms, the so-called assignment rules. We refer
to a class of rules of the type "P is-a Q", where P is a predicate whose variable travels from
one hand side to the other of the implication symbol. For example, if we want to
express the concept Robert is a man, we can use the following closed formula:
∀x Robert(x) =⇒ Man(x)
(4)
but before that, we must consider a premise: the introduction of such rules in a KB is
possible only by shifting all its predicates from a strictly semantic domain to a purely conceptual
one, because in a semantic domain we have only the knowledge of the morphological relationships
between words given by their syntactic properties. Basically, we need a medium to give
additional meaning to our predicates, which is provided by WordNet [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. This allows us to
perform logical reasoning in a conceptual space thanks to the following functions:
F : S → P
F(A) : A(S) → A(P)
(5)
        </p>
        <p>F is the Interpreter Function between the space S of all the semantic predicates which can be
yielded by the MST sets and the space P of all the conceptual predicates; it is not injective, because a
single semantic predicate might have multiple correspondences in the codomain, one for each
different synset containing the lemma in exam. F(A) is the analogous mapping between the domains
and codomains of the arguments of F’s predicates, which have equal arity. For instance, considering
the FOL expression of (4):</p>
        <p>be:VBZ(e1, x1, x2) ∧ Robert:NNP(x1) ∧ man:NN(x2)
After an analysis of be, we find the lemma within the WordNet synset encoded by be.v.01
and defined by the gloss: have the quality of being something. This is the medium we need for
the domain shifting which gives a common-sense meaning to our predicates.</p>
        <p>In light of the above, in the new conceptual domain given by (5), the same expression can be
rewritten as:
be_VBZ(d1, y1, y2) ∧ Robert_NNP(y1) ∧ man_NN(y2)
where be_VBZ is fixed on the value which identifies y1 with y2, Robert_NNP(x) means that x
identifies Robert, and man_NN(x) means that x identifies a man.</p>
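        <p>The shift from the semantic to the conceptual domain can be sketched as a relabelling keyed by the chosen synset (the table is illustrative; in CASPAR the synsets come from WordNet):</p>
        <p>
```python
# Semantic label -> (synset id, conceptual label). The synset supplies the
# common-sense meaning; since F is not injective, one lemma may map to
# several conceptual predicates, one per synset containing it.
INTERPRETATION = {
    "be:VBZ":     ("be.v.01", "be_VBZ"),     # gloss: "have the quality of being something"
    "Robert:NNP": (None, "Robert_NNP"),      # proper nouns need no synset
    "man:NN":     ("man.n.01", "man_NN"),
}

def F(semantic_label):
    """Interpreter function: semantic predicate label -> conceptual label."""
    return INTERPRETATION[semantic_label][1]

print(F("be:VBZ"))  # be_VBZ
```
        </p>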
        <p>Considering the meaning of be_VBZ, it also makes sense to rewrite the formula as:
∀y Robert_NNP(y) =⇒ man_NN(y)
(6)
where y is a bound variable like x in (4).</p>
        <p>Having such a rule in a KB means that we can implicitly admit additional clauses having
man_NN(y) as argument instead of Robert_NNP(y).</p>
        <p>The same expression, of course, in a conceptual domain can also be rewritten as a composite
fact, where Robert_NNP(y) becomes an argument of man_NN(y), as follows:
man_NN(Robert_NNP(y))
(7)
which agrees with the hierarchy of (3) as outcome of the Definite Clauses Builder.</p>
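        <p>Applying an assignment rule such as Robert_NNP(y) ⇒ man_NN(y) at assertion-time can be sketched as generating, for each clause mentioning the left-hand predicate, a copy with the argument substituted (a simplification of the KB-expansion described in Section 4):</p>
        <p>
```python
def apply_assignment_rule(clauses, lhs, rhs):
    """For a rule lhs(y) => rhs(y), expand the KB with a copy of every
    clause mentioning lhs, with lhs replaced by rhs (derived knowledge)."""
    derived = [c.replace(lhs, rhs) for c in clauses if lhs in c]
    return clauses + [c for c in derived if c not in clauses]

kb = ["drink_VBD(Robert_NNP(y1), wine_NN(y2))"]
kb = apply_assignment_rule(kb, "Robert_NNP", "man_NN")
print(kb[-1])  # drink_VBD(man_NN(y1), wine_NN(y2))
```
        </p>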
        <p>
          As claimed in [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], not every KB can be converted into a set of definite clauses, because of
the single-positive-literal restriction, but many KBs can, like the one related to this work, for the
following reasons:
1. No clause made of one single literal will ever be negative, due to the closed-world
assumption. Negations, initially treated like any other adverb, when detected and related
to the ROOT dependency are considered as polarity inverters of verbal phrases; so, in this
case, the assert will be turned into a retract.
2. When the right hand side of a clause is made of more than one literal, it is easy to
demonstrate that, by applying the implication elimination rule and the principle of
distributivity of ∨ over ∧, a non-definite clause can be split into n definite clauses
(where n is the number of consequent literals).
        </p>
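        <p>The splitting in point 2 can be sketched as follows (a minimal illustration, not the architecture's own code): a clause A1 ∧ … ∧ Ak =⇒ (B1 ∧ … ∧ Bn) becomes the n definite clauses A1 ∧ … ∧ Ak =⇒ Bi.</p>

```python
# Splitting a non-definite clause whose consequent is a conjunction
# of n literals into n definite clauses (implication elimination plus
# distributivity of ∨ over ∧).

def split_clause(antecedents, consequents):
    """[A1..Ak] => (B1 ∧ ... ∧ Bn)  -->  n clauses [A1..Ak] => Bi."""
    return [(list(antecedents), b) for b in consequents]

for ants, cons in split_clause(["P(x)", "Q(x)"], ["R(x)", "S(x)"]):
    print(" ∧ ".join(ants), "=>", cons)
# P(x) ∧ Q(x) => R(x)
# P(x) ∧ Q(x) => S(x)
```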
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Nested Reasoning and Clause Conceptual Generalizations</title>
      <p>The aim of the Cognitive Reasoner is to query a KB made of nested clauses that are also made
closer to any possible related query, thanks to an appropriate pre-processing at assertion-time.
Such a pre-processing, which creates a runtime expansion of the KB for every asserted clause,
takes advantage of assignment rules for derivation of new knowledge.</p>
      <p>The Backward-Chaining algorithm as is, in the presence of clauses where argument manipulation
is required, might not be effective. When required clauses are not present in the KB but are
deducible by proper argument substitutions, the clause evaluations at reasoning-time can be
quite heavy and not feasible in terms of complexity, because the process requires
unifications at every single step. Instead, we will show how, by properly expanding the KB at
assertion-time, the reasoning itself can be achieved acceptably. In order to reach such a goal,
CASPAR extends the radius of the nominal Backward-Chaining through the expansion of the
Clauses KB with new knowledge generated from argument substitutions on copies of
specific clauses asserted before.</p>
      <p>For instance, let us consider a KB made at most of one-level composite predicates (supposing
a zero-level composite predicate to be P(x)), as follows:
P1(G1(x1)) ∧ P2(G2(x2)) =⇒ P3(F3(x3))</p>
      <p>P1(F1(x1))</p>
      <p>P2(F2(x2))
F1(x) =⇒ G1(x)
F2(x) =⇒ G2(x)</p>
      <p>H3(x) =⇒ F3(x)</p>
      <p>Querying such a KB with P3(H3(x)), for instance, the Backward-Chaining
algorithm will return False because no unifiable literal is present, either as a fact or as the
consequent of a clause. Instead, by exploiting H3(x) =⇒ F3(x), we can also query the KB with
P3(F3(x)), which is present as the consequent of the first clause and is surely satisfied together
with P3(H3(x)): that is what we define as Nested Reasoning.</p>
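        <p>The behavior above can be sketched as a toy reconstruction (hypothetical representation, not CASPAR's actual implementation), where each term is a nested tuple of functor names and every argument is an implicit free variable:</p>

```python
# Toy KB from the example: facts, one rule, and X(x) => Y(x)
# substitution rules. P3(H3(x)) is the nested tuple ("P3", ("H3",)).

FACTS = {("P1", ("F1",)), ("P2", ("F2",))}
RULES = [([("P1", ("G1",)), ("P2", ("G2",))], ("P3", ("F3",)))]
SUBST = {"F1": "G1", "F2": "G2", "H3": "F3"}   # X(x) => Y(x) rules

def nominal(goal):
    """Plain Backward-Chaining: facts and rule consequents only."""
    if goal in FACTS:
        return True
    return any(cons == goal and all(nominal(p) for p in prem)
               for prem, cons in RULES)

def nested(goal, depth=0):
    """Backward-Chaining extended with inner-argument substitution."""
    if depth > 8:                       # guard against rewrite loops
        return False
    if goal in FACTS:
        return True
    if any(cons == goal and all(nested(p, depth + 1) for p in prem)
           for prem, cons in RULES):
        return True
    head, (inner,) = goal
    # shift the query along an inner => outer rule (H3 -> F3) ...
    if inner in SUBST and nested((head, (SUBST[inner],)), depth + 1):
        return True
    # ... or prove head(inner) from a fact head(X) with X => inner
    return any(SUBST[x] == inner and nested((head, (x,)), depth + 1)
               for x in SUBST)

print(nominal(("P3", ("H3",))))   # False
print(nested(("P3", ("H3",))))    # True
```

<p>Here the nominal prover fails on P3(H3(x)), while the nested one rewrites it into P3(F3(x)) and then discharges the premises P1(G1(x1)) and P2(G2(x2)) via the implicitly asserted substitution clauses.</p>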
      <p>Now, to continue the reasoning process, we should check the premises of such a clause,
which is made of the conjunction of two literals, namely P1(G1(x1)) and P2(G2(x2)). The
latter, although not initially asserted, can be obtained by argument substitution on
copies of other clauses from the same KB. Such a process is achieved by implicitly asserting the
following clauses together with P1(F1(x1)) and P2(F2(x2)):</p>
      <p>P1(F1(x1)) =⇒ P1(G1(x1))</p>
      <p>P2(F2(x2)) =⇒ P2(G2(x2))</p>
      <p>Since we cannot know in advance what a future successful reasoning requires, considering
all possible nesting levels, along with the previous clauses also the so-called Clause Conceptual
Generalizations will be asserted:</p>
      <p>P1(G1(x1)) ∧ P2(G2(x2)) =⇒ F3(x3)</p>
      <p>F1(x1)</p>
      <p>F2(x2)
where the antecedent of the implication is unchanged to preserve the validity of the rule, while
F1(x1), F2(x2), F3(x3), as satisfiability contributors of P1(F1(x1)), P2(F2(x2)),
P3(F3(x3)) respectively, are assumed to be asserted together with the latter. In other terms, the predicates P1,
P2, P3 can be considered as modifiers of F1, F2, F3, respectively.</p>
      <p>A generalization also considering the antecedent of the implicative formula is possible only
through a weaker assertion of the entire formula itself, by changing =⇒ into ∧ as follows:
∃ x1, x2, x3 | G1(x1) ∧ G2(x2) ∧ F3(x3)
which is not admitted as a definite clause, since it is not a single positive literal. In any case, the
mutual existence of x1, x2, x3 satisfying such a conjunction is already subsumed by the
implication.</p>
      <p>After such a theoretical premise, let us consider a more practical example with the following
natural language utterance:</p>
      <p>When the sun shines hard, Barbara drinks slowly a fresh lemonade
The corresponding definite clause will be (omitting the POS tags for the sake of readability):
Hard(Shine(Sun(x1), __)) =⇒ Slowly(Drink(Barbara(x3), Fresh(Lemonade(x4))))
Considering adjectives, adverbs and prepositions as modifiers, and following the schema in Table 1,
all the clause generalizations (corresponding to the first three rows of the table, while the fourth row
is the initial clause) can be asserted as follows:</p>
      <p>Hard(Shine(Sun(x1), __)) =⇒ Drink(Barbara(x3), Lemonade(x4))
Hard(Shine(Sun(x1), __)) =⇒ Slowly(Drink(Barbara(x3), Lemonade(x4)))
Hard(Shine(Sun(x1), __)) =⇒ Drink(Barbara(x3), Fresh(Lemonade(x4)))
As said before, the antecedent (when existing) of all generalizations remains unchanged
to preserve the quality of the triggering condition, while the consequent shape will range over all
possible variations of its modifiers, which will be 2^n, with n as the number of modifiers. Here the
adverb Hard, being a common part of all the antecedents, is always applied.</p>
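        <p>The 2^n count corresponds to taking every subset of the consequent's modifiers; a minimal sketch (illustrative names, not the architecture's own code):</p>

```python
# Enumerating the 2**n consequent variants of the example clause:
# the consequent core is Drink(Barbara, Lemonade) and its modifiers
# are Slowly and Fresh (n = 2), giving 4 generalizations.
from itertools import combinations

def generalizations(core, modifiers):
    """All 2**n consequent variants: keep any subset of modifiers."""
    variants = []
    for r in range(len(modifiers) + 1):
        for kept in combinations(modifiers, r):
            variants.append((set(kept), core))
    return variants

variants = generalizations("Drink(Barbara, Lemonade)", ["Slowly", "Fresh"])
print(len(variants))  # 4; with n = 6 modifiers it would be 64
```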
      <p>
        Although in such a case the number of generalizations is equal to 4, in general it might be quite
higher: it has been observed, after an analysis of several text corpora from the Stanford Question
Answering Dataset [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ], that the average number of modifiers in a single non-implicative
utterance is equal to 6. In such cases the number of generalizations would be equal to 64, and
greater numbers of modifiers would make the parsing less tractable, considering also the argument
analysis for possible substitutions. In order to limit such a phenomenon, depending on the
domain, CASPAR gives the chance to limit the number of generalizations via a specific parameter
which modifies the policies of selective inclusion/exclusion of modifier categories (adjectives,
adverbs or prepositions).
      </p>
      <p>
        In such a scenario, of course, the more the combinatorial possibilities, the greater the number
of clauses in the Clauses KB. It will appear clear to the reader that this approach sacrifices
space for lighter reasoning, but we rely on three distinct points in favor of our choice:
1. An efficient indexing policy of the Clauses KB, for fast retrieval of any clause.
2. The usage of the class Sensor of Phidias for every clause assertion, which works
asynchronously with respect to the main production rules system, will make the agent
immediately available after every interrogation without any latency, while additional clauses
will be asserted in the background.
3. We aim to keep the Clauses KB as small as possible, in order to limit the combinatorial
chances. In this paper we assume the assignment rules are properly chosen among those most
likely to get the query closer to a proper candidate. As future work, a reasonable
balancing between two distinct Clauses KBs working on different levels might be a good
solution: in the lower level (long-term memory) only clauses pertinent to the query would
be searched, then put in the higher one (short-term memory) for attempting a successful
reasoning. Similar approaches have been used with interesting outcomes in some of the
widespread Cognitive Architectures [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        As a result evaluation, we consider a slightly rephrased KB (Colonel West) treated in [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ],
showing how CASPAR is able to perform a successful reasoning for a question requiring a
non-trivial deduction. Although this architecture is designed to work as a vocal assistant, one can
likewise verify the reasoning by manually asserting the same belief STT asserted by the Sensor
Instance, as shown in Listing 2 in the Appendix. There, after each assertion (lines 1, 8,
13, 18, 30) the newly asserted clauses are shown, and it appears clear how the agent expands the
Clauses KB considering generalizations and argument substitutions. After the query is given
(line 45), it is shown how the nominal Backward-Chaining algorithm is not enough to achieve
a successful reasoning, while this happens using the Nested Reasoning.
      </p>
      <p>In Section 3.3 we have also shown how a direct command or routine can be subordinated to
a clause. Although in the example (see lines 12-13 of Listing 1 in the Appendix) the production
rule contains the representation of An inhabitant is at home, even a clause involving the Nested
Reasoning might trigger such a rule; for instance, a simple toy scenario could include a facial
recognizer among the domotic devices, which obtains information about known/unknown faces
when someone is detected in the environment. Such a recognition could generate a clause
representing (for instance) Robert is at home, which, combined with another clause representing
Robert is an inhabitant, will produce the representation of An inhabitant is at home; the latter
will trigger the production rule (related to a direct command or routine) that will turn off the
alarm in the garage. This will not happen if a thief or a domestic animal is detected; thus
it provides indeed a valid example of how the Beliefs KB and Clauses KB interact with each
other in a non-trivial process of deduction.</p>
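      <p>A toy sketch of this interaction (hypothetical names, deliberately simplified with respect to the actual Phidias production rules):</p>

```python
# "Robert is at home" plus "Robert is an inhabitant" yields
# "an inhabitant is at home", which fires the production rule
# turning off the garage alarm; a detected thief or pet asserts
# at_home but not inhabitant, so the rule does not fire.

clauses = {("at_home", "Robert"), ("inhabitant", "Robert")}

def inhabitant_at_home(kb):
    """True if some individual is both at home and an inhabitant."""
    return any(("at_home", who) in kb and ("inhabitant", who) in kb
               for _, who in kb)

alarm_on = True
if inhabitant_at_home(clauses):   # the production rule fires
    alarm_on = False
print(alarm_on)  # False
```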
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>In this paper, we have presented the design of a cognitive architecture called CASPAR able to
implement agents capable of both reactive and cognitive reasoning. Moreover, we want
to mark a way towards a comprehensive strategy for making deductions on Knowledge Bases
whose content is parsed directly from natural language. This architecture works by using a
Knowledge Base divided into two distinct parts (Beliefs KB and Clauses KB), which can also
interact with each other in decision-making processes. In particular, as the Clauses KB
grows, its cognitive features improve due to an implicit and native capability of inferring
combinatorial rules from its own Knowledge Base. Thanks to the Nested Reasoning and
the Clause Conceptual Generalizations, CASPAR is able to transcend the limits of the known
Backward-Chaining algorithm due to the nested semantic notation; the latter is as highly
descriptive as it is compact. Furthermore, agents based on such an architecture are able to parse
complex direct IoT commands and routines, letting users easily customize their own Smart
Environment Interface and Sensors, with whatever Speech-to-Text engine.</p>
      <p>As future work, we want to test CASPAR's capabilities with languages other than English
and evaluate other integrations, like Abductive Reasoning and Argumentation. Even chatbot
applications can take advantage of this architecture's features.</p>
      <p>Finally, we want to exploit Phidias' multi-agent features by implementing standardized
communication protocols between agents, and to exploit other ontologies as well.</p>
      <p>In this appendix, a simple instance of Smart Environment Interface is provided (Listing 1)
together with an example of how Clauses Knowledge Base changes, after assertions (Listing 2).</p>
      <p>Listing 1: A simple instance of Smart Environment Interface
1 &gt; +STT("Nono is an hostile nation")
2
3 Be(Nono(x1), Nation(x2))
4 Be(Nono(x1), Hostile(Nation(x2)))
5 Nono(x) ==&gt; Nation(x)
6 Nono(x) ==&gt; Hostile(Nation(x))
7
8 &gt; +STT("Colonel West is American")
9
10 Be(Colonel_West(x1), American(x2))
11 Colonel_West(x) ==&gt; American(x)
12
13 &gt; +STT("missiles are weapons")
14
15 Be(Missile(x1), Weapon(x2))
16 Missile(x) ==&gt; Weapon(x)
17
18 &gt; +STT("Colonel West sells missiles to Nono")
19
20 Sell(Colonel_West(x1), Missile(x2)) ==&gt; Sell(American(v_0), Missile(x4))
21 Sell(Colonel_West(x1), Missile(x2)) ==&gt; Sell(American(x3), Weapon(v_1))
22 Sell(Colonel_West(x1), Missile(x2)) ==&gt; Sell(Colonel_West(x1), Weapon(v_2))
23 Sell(Colonel_West(x1), Missile(x2))
24 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(Colonel_West(x1), Missile(x2)), Nation(v_4))
25 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_5), Missile(v_6)), Nation(v_4))
26 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_7), Weapon(v_8)), Nation(v_4))
27 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(Colonel_West(v_9), Weapon(v_10)), Nation(v_4))
28 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(Colonel_West(x1), Missile(x2)), Hostile(Nation(v_11)))
29 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_12), Missile(v_13)), Hostile(Nation(v_11)))
30 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_14), Weapon(v_15)), Hostile(Nation(v_11)))
31 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(Colonel_West(v_16), Weapon(v_17)), Hostile(Nation(v_11)))
32 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_18), Missile(v_19)), Nono(x3))
33 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(American(v_22), Weapon(v_23)), Nono(x3))
34 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3)) ==&gt; To(Sell(Colonel_West(v_26), Weapon(v_27)), Nono(x3))
35 To(Sell(Colonel_West(x1), Missile(x2)), Nono(x3))
36
37 &gt;+STT("When an American sells weapons to a hostile nation, that American is a criminal")
38
39 To(Sell(American(x1), Weapon(x2)), Hostile(Nation(x3))) ==&gt; Be(American(x4), Criminal(x5))
40
41 &gt;+STT("reason")
42
43 Waiting for query...
44
45 &gt; +STT("Colonel West is a criminal")
46
47 Reasoning...............
48
49 Query: Be_VBZ(Colonel_West(x1), Criminal(x2))
50
51 ---- NOMINAL REASONING ----
52
53 Result: False
54
55 ---- NESTED REASONING ----
56
57 Result: {v_211: v_121, v_212: x2, v_272: v_208, v_273: v_209, v_274: v_210, v_358: v_269, v_359: v_270, v_360: v_271}
Listing 2: CASPAR Clauses Knowledge Base changes and reasoning, after assertions</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Këpuska</surname>
          </string-name>
          , G. Bohouta,
          <article-title>Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home)</article-title>
          ,
          <source>in: 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Jeon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. R.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>An Intelligent Dialogue Agent for the IoT Home</article-title>
          , in: AAAI Workshops,
          <year>2016</year>
          . URL: https://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12596.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E. V.</given-names>
            <surname>Polyakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Mazhanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rolich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Voskov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Kachalova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Polyakov</surname>
          </string-name>
          ,
          <article-title>Investigation and development of the intelligent voice assistant for the Internet of Things using machine learning</article-title>
          ,
          <source>in: 2018 Moscow Workshop on Electronic and Networking Technologies (MWENT)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mehrabani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bangalore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stern</surname>
          </string-name>
          ,
          <article-title>Personalized speech recognition for Internet of Things</article-title>
          ,
          <source>in: 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT)</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>369</fpage>
          -
          <lpage>374</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Haldar</surname>
          </string-name>
          ,
          <article-title>Applying Chatbots to the Internet of Things: Opportunities and Architectural Elements</article-title>
          ,
          <source>International Journal of Advanced Computer Science and Applications</source>
          <volume>7</volume>
          (
          <year>2016</year>
          ). URL: http://dx.doi.org/10.14569/IJACSA.2016.071119. doi:10.14569/IJACSA.2016.071119.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Baby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Swathi</surname>
          </string-name>
          ,
          <article-title>Home automation using IoT and a chatbot using natural language processing</article-title>
          ,
          <source>in: 2017 Innovations in Power and Advanced Computing Technologies (i-PACT)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I.</given-names>
            <surname>Kotseruba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. K.</given-names>
            <surname>Tsotsos</surname>
          </string-name>
          ,
          <article-title>40 years of cognitive architectures: core cognitive abilities and practical applications</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          <volume>53</volume>
          ,
          <fpage>17</fpage>
          -
          <lpage>94</lpage>
          (
          <year>2020</year>
          ). doi:https://doi.org/10.1007/s10462-018-9646-y.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Lucentini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Gudwin</surname>
          </string-name>
          ,
          <article-title>A comparison among cognitive architectures: A theoretical analysis</article-title>
          ,
          <source>in: 2015 Annual International Conference on Biologically Inspired Cognitive Architectures</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Giulio</surname>
          </string-name>
          ,
          <article-title>Consciousness as integrated information: A provisional manifesto, The Biological bulletin (</article-title>
          <year>2008</year>
          )
          <volume>215</volume>
          (
          <issue>3</issue>
          ),
          <fpage>216</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Epstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Passonneau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gordon</surname>
          </string-name>
          , T. Ligorio,
          <article-title>The role of knowledge and certainty in understanding for dialogue</article-title>
          ,
          <source>in: AAAI Fall Symposium Series</source>
          ,
          <year>2011</year>
          . URL: https://www.aaai.org/ocs/index.php/FSS/FSS11/paper/view/4179.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Këpuska</surname>
          </string-name>
          , G. Bohouta,
          <article-title>Comparing speech recognition systems (microsoft api, google api</article-title>
          and cmu sphinx),
          <source>Int. Journal of Engineering Research and Application</source>
          Vol.
          <volume>7</volume>
          ,
          <string-name>
            <surname>Issue</surname>
            <given-names>3</given-names>
          </string-name>
          , (
          <issue>Part -2</issue>
          ) (
          <year>March 2017</year>
          )
          <fpage>20</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tomai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Forbus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klenk</surname>
          </string-name>
          ,
          <article-title>An integrated reasoning approach to moral decision-making</article-title>
          ,
          <source>in: Proceedings of the 23rd National Conference on Artificial Intelligence -</source>
          Volume
          <volume>3</volume>
          , AAAI'
          <fpage>08</fpage>
          , AAAI Press,
          <year>2008</year>
          , p.
          <fpage>1280</fpage>
          -
          <lpage>1286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schermerhorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kramer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <source>First Steps toward Natural Humanlike HRI, Auton. Robots</source>
          <volume>22</volume>
          (
          <year>2007</year>
          )
          <fpage>411</fpage>
          -
          <lpage>423</lpage>
          . URL: https://doi.org/10.1007/s10514-006-9018-3. doi:10.1007/s10514-006-9018-3.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Schacter</surname>
          </string-name>
          ,
          <article-title>Implicit memory: history and current status</article-title>
          ,
          <source>Journal of Experimental Psychology: Learning</source>
          , Memory, and Cognition vol.
          <volume>13</volume>
          ,
          <year>1987</year>
          (
          <year>1987</year>
          )
          <fpage>501</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Davidson</surname>
          </string-name>
          ,
          <article-title>The logical form of action sentences, in: The logic of decision and action</article-title>
          , University of Pittsburg Press,
          <year>1967</year>
          , p.
          <fpage>81</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Consortium</surname>
          </string-name>
          , Treebank-
          <volume>3</volume>
          ,
          <year>2017</year>
          . URL: https://catalog.ldc.upenn.edu/LDC99T42.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>An Overview of Modern Speech Recognition</article-title>
          , Microsoft Corporation,
          <year>2009</year>
          , pp.
          <fpage>339</fpage>
          -
          <lpage>344</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rajan</surname>
          </string-name>
          <string-name>
            <surname>Mehla</surname>
          </string-name>
          , Mamta,
          <article-title>Automatic speech recognition: A survey</article-title>
          ,
          <source>International Journal of Advanced Research in Computer Science and Electronics Engineering</source>
          Volume
          <volume>3</volume>
          ,
          <string-name>
            <surname>Issue 1</surname>
          </string-name>
          (January 2014)
          <fpage>20</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>A. D. Saliha</surname>
            <given-names>Benkerzaz</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Youssef</given-names>
            <surname>Elmir</surname>
          </string-name>
          ,
          <article-title>A study on automatic speech recognition</article-title>
          ,
          <source>Journal of Information Technology Review Volume 10, Number</source>
          <volume>3</volume>
          (
          <year>August 2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>A. S. Jinho D. Choi</surname>
          </string-name>
          , Joel Tetreault, It depends:
          <article-title>Dependency parser comparison using a web-based evaluation tool, in: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th</article-title>
          <source>International Joint Conference on Natural Language Processing</source>
          ,
          <year>2015</year>
          , p.
          <fpage>387</fpage>
          -
          <lpage>396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Anthony</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Patrick</surname>
          </string-name>
          ,
          <article-title>Dependency based logical form transformations</article-title>
          ,
          <source>in: SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>ClearNLP</surname>
          </string-name>
          , Clear nlp tagset,
          <year>2015</year>
          . URL: https://github.com/clir/clearnlp-guidelines.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>B. H.</given-names>
            <surname>Partee</surname>
          </string-name>
          ,
          <source>Lexical Semantics and Compositionality</source>
          , volume
          <volume>1</volume>
          ,
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Gleitman</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Liberman</surname>
          </string-name>
          , editors,
          <year>1995</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>F.</given-names>
            <surname>D'Urso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <article-title>Programming intelligent iot systems with a python-based declarative tool</article-title>
          , in:
          <source>The Workshops of the 18th International Conference of the Italian Association for Artificial Intelligence</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. F.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <article-title>Meaning Extraction in a Domotic Assistant Agent Interacting by means of Natural Language</article-title>
          , in:
          <source>28th IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises</source>
          , IEEE,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fichera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Messina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pappalardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <article-title>A python framework for programming autonomous robots using a declarative approach</article-title>
          ,
          <source>Sci. Comput. Program.</source>
          <volume>139</volume>
          (
          <year>2017</year>
          )
          <fpage>36</fpage>
          -
          <lpage>55</lpage>
          . URL: https://doi.org/10.1016/j.scico.2017.01.003. doi:10.1016/j.scico.2017.01.003.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>WordNet: A lexical database for English</article-title>
          ,
          <source>Communications of the ACM</source>
          , Vol.
          <volume>38</volume>
          , No.
          <issue>11</issue>
          ,
          <year>1995</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          ,
          <source>Artificial Intelligence: A Modern Approach</source>
          , Pearson,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29] Stanford,
          <source>The Stanford Question Answering Dataset SQuAD2.0</source>
          ,
          <year>2018</year>
          . URL: https://rajpurkar.github.io/SQuAD-explorer/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>