<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hybrid Machine Learning/Knowledge Base Systems Learning through Natural Language Dialogue with Deep Learning Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergei Nirenburg</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nikhil Krishnaswamy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marjorie McShane</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Colorado State University</institution>
          ,
          <addr-line>1100 Center Avenue Mall, Fort Collins, CO 80523</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Rensselaer Polytechnic Institute</institution>
          ,
          <addr-line>110 8th St. Troy, NY 12180</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Neurosymbolic approaches to AI typically involve attempts to reincorporate the structure and speed of symbolic reasoning into the flexible representations of deep learning. “Knowledge,” in this understanding, is typically represented in a structured ontology or knowledge base that relies on human expertise and effort to construct. In this paper, we present a vision for “language-endowed intelligent agents,” a type of lifelong learner that begins with a hand-crafted knowledge base and a deep language understander and extends that knowledge base over the course of its life through dialogue and interaction with both humans and other AI systems, generative large language models in particular. We discuss the requirements for such a system, present evidence toward the feasibility of the approach, and conclude with future challenges and research directions.</p>
      </abstract>
      <kwd-group>
        <kwd>Language-endowed intelligent agents (LEIAs)</kwd>
        <kwd>Learning through dialogue between DL model and LEIA</kwd>
        <kwd>Neurosymbolic AI</kwd>
        <kwd>Lifelong learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The emergence of deep learning made AI today’s technology of tomorrow in the eyes of
developers, potential users and the general public. Deep learning (DL) models can uncover
patterns implicit in enormous collections of data and demonstrate impressive performance on a
slew of text processing tasks due largely to improvements in neural language modeling using
versions of transformer architectures [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], as implemented in BERT [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], GPT [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] and other
model families. Still, they are subject to a number of conceptual and practical limitations related
to resource consumption, performance in adversarial settings, and ability to “understand” in
a colloquial sense (e.g., [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]). They are simply very efficient methods of mining huge text
repositories to find the most probable next word to follow a given word sequence and do not
actually understand their inputs or outputs, or the nature of their own processing. This is why
they fail to explain what they are doing and why.1 This realization – as well as the well-known
difficulties that DL models experience in adversarial settings [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and their inability to reason [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] – engendered several directions of work to overcome these deficiencies. One example of such a
program of work is explainable AI (XAI) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a large-scale research program whose aim is to
develop models that attempt to explain the decision-making behind the output of DL models by
recreating their results using specially developed explanation-oriented models that are based
on human-interpretable features.
      </p>
      <p>
        A general methodology of overcoming limitations of a particular approach is to use it together
with another approach in a hybrid system designed to carry out a particular task. A task-oriented
methodology involving the coexistence of very different processing approaches in a single AI system
is more difficult to implement but promises to yield better results than a system that exclusively
seeks to exploit a single approach. For example, all pre-semantic processing in our OntoSem
language understander [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] is carried out by ML-based subsystems (currently based on Spacy
technology2). A number of recent proposals have been put forward to combine neural networks
and symbolic reasoning. These “neurosymbolic” approaches to AI have so far been understood
primarily as a method of using the content, structure and efficiency of symbolic reasoning to
boost the performance of DL models [
        <xref ref-type="bibr" rid="ref12 ref13 ref14">12, 13, 14</xref>
        ]. We describe a program of work that also seeks
to integrate symbolic and neural net processing. However, the objective of the program we
propose is in some sense the inverse of that pursued by the current neurosymbolic approaches.
      </p>
      <p>
        We propose to use flexible representations, big data orientation and analogical reasoning
of DL to overcome the notorious “AI knowledge bottleneck.” The complexities and the sheer
expense of acquiring knowledge for AI systems have been amply demonstrated and documented
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].3 Lowering this cost through automation is an attractive option. In what follows, we first
describe the infrastructure we developed and demonstrated to facilitate conceptual learning
by AI agents capable of understanding the meaning of language inputs. Next, we describe
our initial experiments on using DL models to enhance the efficiency of the approach. Finally,
we present how we intend to extend the use of DL models in our approach to lifelong agent
learning.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The Bootstrapping Infrastructure for Overcoming the Knowledge Bottleneck</title>
      <p>
        AI agents capable of human-level understanding, reasoning, decision-making and action must
rely on vast amounts of knowledge about the world, typically in the form of an ontological
model. Additional knowledge is necessary to support the agent’s ability to interpret percepts
(language, images, etc.) in terms of its ontology. Acquiring such knowledge is notoriously
difficult. One option is to obviate the need for knowledge (this was the initial hope of empirical
methods). The option we are pursuing is to make knowledge acquisition less expensive by
progressively automating it. The architecture of the system (we call such systems
“language-endowed intelligent agents,” or LEIAs) onto which we want to “graft” this learning capability is
illustrated in Figure 1. We develop LEIAs to serve as members of human-AI teams in critical
applications where humans must fully trust LEIAs’ analyses, conclusions and recommendations,
cf. e.g., [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. As already mentioned, LEIAs include both ML-based and knowledge-based
processing modules [
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ].
      </p>
      <p>
        [Figure 1: LEIA architecture. NL text and image interpretation yields meaning representations (MRs) that feed attention, reasoning and decision-making, which in turn produces spoken or physical action specifications for rendering; knowledge resources (lexicon, ontology, opticon) and remembered concept instances provide knowledge support for specific tasks alongside the control and data flow.]
      </p>
      <p>
        1. While not yet the topic of substantial academic research due to its novelty, many researchers and some popular
media have raised similar concerns about newer models like OpenAI’s ChatGPT.
2. https://spacy.io/
3. Much less publicized is the fact that the cost of human participation in developing ML models is not at all
negligible [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        The prerequisites to bootstrap the learning capability of LEIAs include the availability of:
• a natural language (NL) understanding system (OntoSem) capable of extracting and
representing ontologically grounded meanings of language inputs (text meaning
representations, or TMRs) [
        <xref ref-type="bibr" rid="ref11 ref20 ref21 ref22">20, 11, 21, 22</xref>
        ];
• an image interpretation system that ontologically interprets results of computer vision,
yielding visual meaning representations (VMRs) [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref26">23, 24, 25, 26</xref>
        ];
• a conceptual learning system that takes TMRs and VMRs as inputs and augments the
agent’s knowledge resources – an ontological world model, an episodic memory of concept
exemplars, a lexicon supporting language interpretation and its counterpart supporting
visual perception, the opticon [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ].
      </p>
      <p>
        All three of the above systems used by LEIAs rely on knowledge support – a lexicon for
the language analyzer, an opticon (roughly, a set of image-to-concept pairings) for the image
interpreter, and an ontology basis for all three. At present, we bootstrap the system with an
ontology of ∼ 160K RDF triples, an English lexicon of ∼ 30K word senses and a small opticon [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <p>The basic idea behind our approach is to exploit the LEIAs’ perception interpretation
capabilities that are already used in their routine operational configurations and augment their
existing reasoning repertoire to include a dedicated learning system that will generate novel
knowledge elements and enhance or modify existing ones. This will in turn enhance the coverage
and precision of the LEIA’s task-oriented perception interpretation and reasoning in a lifelong
virtuous circle.</p>
      <p>
        Knowledge acquisition has been a central concern in LEIA development for a long time [
        <xref ref-type="bibr" rid="ref30 ref31 ref32 ref33">30,
31, 32, 33</xref>
        ]. Early efforts concentrated on data analytics support and the ergonomics of manual
acquisition [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
      </p>
      <p>
        Once the minimal bootstrapping prerequisites for learning through language understanding
had been developed, we configured several proof-of-concept systems to demonstrate ontology
and lexicon learning by reading [35] and through dialogue with a human user [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. We
implemented both a deliberate learning mode, where the LEIA knows its inputs are intended for this
purpose, and an opportunistic mode, where learning co-exists with task-oriented operation.
In this mode, if learning is necessary for performing a task at hand, then it is scheduled right
away, while the task operation is suspended. If the LEIA derives an actionable TMR for a
task-related input even if it cannot interpret the input completely [36], then the learning is
postponed until downtime. Figure 2 illustrates a version of LEIA architecture incorporating
“opportunistic” learning triggered when LEIAs encounter difficulties in interpreting sensory
inputs during routine operation. These difficulties are typically made manifest through lacunae
in the lexicon – missing or underspecified lexicon entries, which typically signify lacunae in
the underlying ontological world model [37].
      </p>
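      <p>
        The scheduling policy behind opportunistic learning can be sketched in a few lines. The Python below is an illustrative stand-in, not the LEIA implementation: the class and field names are invented, and the TMR is reduced to the two properties the policy consults (whether the interpretation is actionable, and which terms remain unknown).
      </p>
      <p>
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TMR:
    """Simplified stand-in for a text meaning representation."""
    actionable: bool          # can the agent act on this interpretation now?
    unknown_terms: List[str]  # lexical items the analyzer could not ground

@dataclass
class LearningScheduler:
    """Illustrative scheduler for opportunistic learning (names hypothetical)."""
    downtime_queue: List[str] = field(default_factory=list)

    def handle(self, tmr: TMR) -> str:
        if not tmr.unknown_terms:
            return "proceed"  # nothing to learn
        if tmr.actionable:
            # Partial interpretation suffices for the task: defer learning.
            self.downtime_queue.extend(tmr.unknown_terms)
            return "proceed-then-learn-later"
        # The task cannot proceed: suspend it and learn immediately.
        return "suspend-and-learn-now"

sched = LearningScheduler()
print(sched.handle(TMR(actionable=True, unknown_terms=["sclerosis"])))
print(sched.handle(TMR(actionable=False, unknown_terms=["sclerosis"])))
```
      </p>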
      <p>Manual data collection for learning still requires large amounts of expensive human time. In
order to gradually alleviate this inefficiency, we have started to experiment with using generative
DL models for this purpose instead of either humans or older-style data analytics. As is well
known, current DL models may still generate incomplete or fallacious outputs. This is why it
might be prudent in a complete system to retain old-style data analytics and human interaction
support for the automatic process of learning knowledge content.</p>
    </sec>
    <sec id="sec-3">
      <title>3. How LEIA Learning Works: An Example</title>
      <p>Suppose a LEIA receives the input: Systemic sclerosis is a multisystemic autoimmune disease of
unknown origin that affects connective tissue, where all the lexical material but the words
systemic and sclerosis are already present in its lexicon. OntoSem creates a skeleton lexicon
entry for systemic and, because nouns can refer to either objects or events, two skeleton entries
for sclerosis (Figure 3).</p>
      <p>The ‘?’ mark on the entry heads signifies that they are still being learned. The presence of
meaning procedure calls signifies that the semantics of their arguments is underdetermined and
an attempt must be made to further specify them at runtime. Next, the basic semantics module
attempts to determine their ontological properties (the current ontology
is based on about 300 properties). Once an ontological concept is created the way
systemic-sclerosis was, prompts for all its (local and, if available, inherited) properties are offered to a
DL model to generate text for further learning.4</p>
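      <p>
        The skeleton entries in this example might look roughly as follows. This is a hypothetical sketch, not the actual OntoSem lexicon format (which is not reproduced in this paper): the field names and head-naming convention are invented for exposition, but the sketch preserves the two ideas in the text, the '?' flag marking an entry still being learned and the meaning-procedure call marking semantics to be specified at runtime.
      </p>
      <p>
```python
# Hypothetical skeleton lexicon entries for the unknown words in
# "Systemic sclerosis is a ... disease ... that affects connective tissue".

def skeleton_entries(word, pos_hypotheses):
    return [
        {
            "head": f"{word}-{pos}?",  # '?' = entry still being learned
            "pos": pos,
            "sem": {"value": None,
                    # semantics underdetermined: specify at runtime
                    "meaning-procedure": "specify-at-runtime"},
        }
        for pos in pos_hypotheses
    ]

# 'systemic' is used as a modifier; a noun like 'sclerosis' may name either
# an object or an event, so it gets two skeleton entries.
entries = skeleton_entries("systemic", ["adj"])
entries += skeleton_entries("sclerosis", ["n-object", "n-event"])
for e in entries:
    print(e["head"])
```
      </p>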
    </sec>
    <sec id="sec-4">
      <title>4. Integrating Image Recognition and Learning the Opticon</title>
      <p>To support ontological interpretation of visual inputs LEIAs must be equipped with an opticon.
The approach to automating its acquisition is structurally similar to that of the acquisition
of the lexicon in that opticon acquisition may trigger further ontology acquisition. The presence
of a lexicon is a prerequisite for our approach to opticon acquisition. This is because the process
relies on the image recognition system outputting image representations paired with natural
4If the newly learned concept does not specify its ancestors, the latter must be determined; space constraints do not
allow us to describe the process we use for this purpose at this time.
language labels. At the most general level, the steps of opticon learning are as follows:
1. the input is a set of DL-generated embeddings of object tokens and English words labeling
each of the tokens; the token representations are in terms of (unmotivated, deep-learned)
feature vectors generated by a DL model (operating either within a real vision system or
a simulated one);
2. The learning system triggers data collection of sentences containing the label;
3. Next, the learning system clusters tokens on the basis of the similarity of their embeddings
to the embedding generated by processing sentences that contain the natural language
label (“target term”), thus effectively disambiguating the label without interpreting it
ontologically.
4. The sentences from the cluster that is the best fit for the embedding generated by image
recognition are fed into the conceptual learning loop of Figure 2 and result in generating
novel skeleton lexicon entries and new or modified existing ontological concepts to
formally interpret their meanings.
5. We then can create an opticon entry indexed by the embedding output by the image
recognition system and listing the newly created (or modified) ontological concept as its
meaning.</p>
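      <p>
        Step 3 can be sketched as a clustering-plus-matching routine. The code below is a toy illustration under simplifying assumptions: plain k-means with deterministic initialization stands in for whatever clustering is actually used, 2-D vectors stand in for real contextualized embeddings, and the image-derived object embedding is assumed to have already been carried into the same space as the token embeddings.
      </p>
      <p>
```python
import numpy as np

def disambiguate_label(token_embs, object_emb, n_clusters=2, iters=20):
    """Cluster embeddings of a label's token instances (plain k-means), then
    keep the cluster whose centroid best fits the object embedding."""
    X = np.asarray(token_embs, dtype=float)
    # Deterministic init: spread initial centroids across the token list.
    centroids = X[np.linspace(0, len(X) - 1, n_clusters).astype(int)].copy()
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = X[assign == k].mean(axis=0)
    best = int(np.argmin(((centroids - np.asarray(object_emb)) ** 2).sum(-1)))
    return [i for i, a in enumerate(assign) if a == best]

# Token instances of "ball": three in the physical-object sense, three in
# the "dance" sense (toy 2-D stand-ins for contextual embeddings).
tokens = [(0, 0), (0.1, 0), (-0.1, 0), (5, 5), (5.1, 5), (4.9, 5)]
print(disambiguate_label(tokens, object_emb=(5.0, 5.0)))  # → [3, 4, 5]
```
      </p>
      <p>
        The sentences whose token indices are returned would then be fed into the conceptual learning loop (Step 4).
      </p>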
      <p>The above process integrates the learning of all the elements of LEIAs’ semantic memory – the
lexicon, the opticon and the ontology. In the next section we present a method of implementing
Step 3 in the above process within a virtual environment, using a simple example.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Experimenting with Integrating DL Models</title>
      <p>
        In [37], a simulator based on the VoxWorld platform [39] was used to generate stochastic object
placements simulating a stacking task.5 In this task, the agent attempts to stack objects with
different geometric characteristics on top of a cube. These characteristics result in different
behavior when stacking is attempted (e.g., a cube placed correctly will remain in place, a sphere
will almost always roll off, and a cylinder will roll off if placed horizontally but will remain in
place if placed with the flat side oriented downward, etc.). Ontological knowledge, here in the
form of VoxML [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] specification of object properties, was used to bootstrap the generation of
the simulations. Knowledge of the objects’ symmetries was used to create perturbations in the
virtual environment to cause the objects to behave more realistically after their placement. E.g.,
objects placed on their rounded edges are more likely to move in directions perpendicular to
the object’s major axis of symmetry.
      </p>
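      <p>
        The symmetry-driven perturbation idea can be illustrated as follows. This is a sketch only: the object names, the table of magnitudes, and the function are invented for exposition, whereas the real system derives such behavior from VoxML symmetry specifications. The point it captures is that displacement perpendicular to an object's major axis of symmetry dominates for roll-prone placements.
      </p>
      <p>
```python
import random

# Hypothetical perturbation magnitudes per placement type.
PERTURB_SCALE = {
    "cube": 0.0,              # flat faces: stable, no perturbation
    "sphere": 1.0,            # rolls in any direction
    "cylinder-upright": 0.0,  # resting on its flat face: stable
    "cylinder-side": 0.8,     # rolls perpendicular to its long axis
}

def perturb_placement(obj_type, rng=random):
    """Return a (roll, drift) displacement after placement: roll is across
    the symmetry axis (dominant), drift is along it (small)."""
    scale = PERTURB_SCALE[obj_type]
    roll = rng.uniform(-scale, scale)
    drift = rng.uniform(-0.1 * scale, 0.1 * scale)
    return (roll, drift)

print(perturb_placement("cube"))           # cubes stay in place
print(perturb_placement("cylinder-side"))  # mostly sideways roll
```
      </p>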
      <p>The object types explored included cube, sphere, cylinder, capsule, small cube, rectangular
prism, egg, pyramid, and cone, and we found that neural approaches can successfully classify
these geometric features into the different object types. The vector representations developed
by these neural models come from information directly grounded to object behavior in the
environment.</p>
      <p>Our experiment involved the following steps:
5In this experiment the input is numerical data drawn directly from object interaction in a virtual environment and
does not contain visual information, although as demonstrated in [40, 41, 42], the same principles can be applied to
representations extracted from visual data.</p>
      <p>1. Prompt a language model: Generate sentences containing the target term to be
grounded;
2. Extract token-level representations: For each instance of the target token, extract a
contextualized numerical representation from a transformer encoder (e.g., BERT);
3. Linear regression between paired embedding vectors: Compute a transformation
matrix ℳ between the paired token and object embedding spaces using ridge regression;6
4. Dialogue with a language model: Generate novel sentences containing instances of
the now-“grounded” target term, as well as negative examples.
5. Extract token-level representations: see step 2;
6. Transform new tokens into object space: Multiply extracted novel token embeddings
by precomputed matrix ℳ to transform new tokens into object space.</p>
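      <p>
        Steps 3 and 6 can be sketched with a closed-form ridge fit. The following code is a minimal illustration, not the experiment's implementation: it uses synthetic vectors in place of BERT token embeddings (768-d in the experiment) and object-classifier embeddings (64-d), and the function names are our own.
      </p>
      <p>
```python
import numpy as np

def fit_ridge_map(token_embs, object_embs, lam=1.0):
    """Step 3: fit a linear map M from token space to object space with L2
    regularization (ridge regression). Rows of the two arrays are paired
    instances; M has shape (d_token, d_object)."""
    X, Y = np.asarray(token_embs, float), np.asarray(object_embs, float)
    d = X.shape[1]
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'Y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def project(token_emb, M):
    """Step 6: carry a new contextualized token embedding into object space."""
    return np.asarray(token_emb, float) @ M

# Synthetic demo: recover a known linear relation between the two spaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))          # stand-in "token" embeddings
M_true = rng.normal(size=(8, 3))      # stand-in ground-truth map
Y = X @ M_true                        # stand-in "object" embeddings
M = fit_ridge_map(X, Y, lam=1e-8)
print(np.allclose(project(X, M), Y, atol=1e-4))  # → True
```
      </p>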
      <p>The language model used here was OpenAI’s ChatGPT model. The model was given prompts
to generate sentences about objects and their properties in context, e.g., Write 20 short sentences
about how blocks are flat on all sides and can be stacked or Write 20 short sentences about how
balls are round and cannot be stacked. Prompts were engineered this way in order to quickly
generate sufficient examples, but in a more authentic dialogue setting, they could easily be generated
one at a time with more naturalistic prompts such as “Tell me why it’s hard to stack a sphere
on another object.”</p>
      <p>Here, ℳ results in a structure-preserving affine transformation between subspaces of each
embedding space, where the vectors chosen from the respective embedding space each form
at minimum the spanning set of the relevant subspaces. Because the vectors chosen for the
computation of ℳ each represent similar but non-identical individual instances of either a
contextualized token or an object under interaction, these vector subspaces define
“concept”-level representations within the respective models from which they were drawn. An optimal ℳ
will transform new instances (the test set) of object-denoting tokens into or near the subspace
defined by object instances, while non-object-denoting tokens will fall outside of the subspaces
of known objects.</p>
      <p>Results. Figure 8 shows a 2D projection of initial results. The transformed novel word
embeddings cluster with the object embeddings such that a K-nearest neighbor classifier will
successfully classify these transformed tokens as references to the correct object. The same
tokens used in their other senses (e.g., “block” in the sense of a city block or “ball” in the sense
of a dance), or completely unconnected tokens will not cluster with any relevant object.</p>
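      <p>
        The classification just described, a K-nearest-neighbor vote over transformed token embeddings, can be sketched as follows. The reference vectors, labels, and 2-D dimensionality are toy stand-ins for the actual object-instance embeddings; a distance threshold (not shown) could additionally reject tokens used in unrelated senses, which fall far from every known object cluster.
      </p>
      <p>
```python
import numpy as np
from collections import Counter

def knn_label(query, reference_vecs, reference_labels, k=3):
    """Classify a transformed token embedding by majority vote among its
    k nearest object-instance embeddings (Euclidean distance)."""
    R = np.asarray(reference_vecs, dtype=float)
    dists = np.sqrt(((R - np.asarray(query, dtype=float)) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    votes = Counter(reference_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy object-instance embeddings for two known objects.
refs = [(0, 0), (0.2, 0), (0, 0.2), (10, 10), (10.2, 10), (10, 10.2)]
labels = ["block", "block", "block", "ball", "ball", "ball"]
print(knn_label((9.8, 10.1), refs, labels))  # → ball
```
      </p>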
      <p>This initial demonstration shows the feasibility of a learning approach mediated by a
physics-based interactive event simulator to classify instances of concepts and learn their ontological
interpretation. The example given herein is purposely simple for the sake of clarity. When
starting with a larger ontology, these specific concepts (block and ball) already exist and need not
be learned, but the same principles apply in an opportunistic-learning setting where the LEIA
encounters some novel entity and needs to learn about it and populate its skeleton ontology for
it.
6In this example ℳ ∈ R768 × R64 (768 = BERT-base embedding size and 64 = object classifier embedding size).
Theoretically the transformation holds without introduction of noise as long as the source dimension is at least the target dimension (here, 768 ≥ 64). This technique has
previously been used to compute transformations between embedding spaces in [40] and [43].</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>Space constraints do not allow a detailed discussion of the many eventualities that the learning
framework we describe will face during its use. Similarly, we do not present here anything like a
comprehensive account of the types of elements that LEIAs will learn within this framework. The
objective was to give a high-level overview of the approach and to report the current status
of a conceptual learning system based on deep language understanding and the incorporation
of the capabilities of DL models with the purpose of making this learning more complete and
less expensive.</p>
      <p>We contend that the proposed framework will benefit both conceptual learning and DL
models. Indeed, the interaction between a DL model and a LEIA can take a push-me-pull-you
turn: the DL model output supports automation of ontology, lexicon and opticon acquisition,
and the content of LEIA knowledge resources may in turn augment the training datasets of DL
models. Retraining the models with LEIAs’ knowledge resources will enhance success of the
LEIA language interpreter in at least a couple of ways: model outputs can be formulated using
only items in the LEIA lexicon; and model outputs can be at least partially formulated in the
LEIAs’ ontological metalanguage. One way of doing this is to fine-tune the DL model decoder
to output answers using an appropriately restricted vocabulary. That is, in order to instantiate
a concept, place it in the correct place in the ontology and specify its properties—many of
which will be inherited from its ancestor—and make sure that the new concept is different from
existing ones, ontological knowledge expressed by the DL model must be expressed in terms
already existing within the ontology in order to be useful. Technically, this is well within
the capabilities of modern language model training, either by outright prohibiting certain tokens
from the output (because these terms are not yet present in the ontology), or by manipulating
the bias weights in the model’s output layer to decrease the bias for token IDs that should
appear less frequently.</p>
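      <p>
        Both mechanisms mentioned above, outright token prohibition and output-layer bias manipulation, can be sketched on raw logits. This is a schematic illustration, not an API of any particular model: the token-id sets are hypothetical, and a real decoder would apply such a mask at every generation step before sampling.
      </p>
      <p>
```python
import numpy as np

def constrain_logits(logits, allowed_ids, forbidden_ids=(), bias=-4.0):
    """Restrict a model's output vocabulary: tokens outside the ontology's
    metalanguage are either down-weighted (soft bias) or hard-masked."""
    out = np.asarray(logits, dtype=float).copy()
    penalty = np.full_like(out, bias)    # soft penalty everywhere...
    penalty[list(allowed_ids)] = 0.0     # ...except on in-ontology tokens
    out += penalty
    out[list(forbidden_ids)] = -np.inf   # outright prohibition
    return out

# Toy 4-token vocabulary: allow ids 0 and 2, forbid id 3.
adjusted = constrain_logits([2.0, 1.5, 1.0, 0.5],
                            allowed_ids={0, 2}, forbidden_ids={3})
print(adjusted)  # id 3 can never be sampled; ids 1 is down-weighted
```
      </p>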
      <p>While generative language models can be useful for generating explanations about phenomena
that a LEIA may encounter, they suffer from some limitations resulting in challenges that must
be addressed. While a common criticism of large language models is that they are sophisticated
“brains-in-vats” that do not possess external understanding of the texts they generate, more
prosaically, they are also prone to confidently generating output that is syntactically correct and
sounds coherent but is factually incorrect or not self-consistent. These considerations suggest
that, at least for the foreseeable future, the results of learning using the proposed framework
will have to be validated either by humans directly or by analyzing any failures of LEIA functioning
due to incorrect or incomplete learning. While we have in the past implemented ergonomic
environments to facilitate inspection of LEIAs’ knowledge resources and processing results,
they remain outside the scope of this paper. We plan to include such a discussion in future
reports.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This text can be viewed as a methodological position paper describing a novel take on integrating
neural net-oriented and symbolic approaches to building artificial intelligent agents, specifically,
to the task of overcoming the AI knowledge bottleneck using automatic conceptual learning
on the basis of extracting the meaning of natural language texts and interpreting it in terms
of a formal ontological world model. This approach builds on our prior work and extends it
through the use of DL models. The proposed approach: a) uses deep learning models to create
curated collections of sentences; b) uses a bootstrapping knowledge base (ontology, lexicon
and opticon) to extract the meanings of these sentences and represent them in an ontologically
interpreted metalanguage; and c) uses a dedicated (currently) rule-based learning system to
extract from these text meaning representations knowledge elements to enhance and modify
the knowledge infrastructure.</p>
      <p>This approach may be used opportunistically, during the intelligent agent’s regular
task-oriented operation – triggering learning when the agent receives perceptual input that its
existing knowledge substrate does not cover. Alternatively, the learning mode can be deliberate
– for example, when a human decides to teach the agent or when the agent initiates learning
itself by inspecting its knowledge resources and triggering the learning process when lacunae
and/or inconsistencies are encountered.</p>
      <p>This paper presents a bird’s eye view of the proposed methodology, with many details of the
process omitted due to space constraints. Detailed descriptions of the approach, the system
under construction and results of experimentation will be presented in future contributions.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was supported in part by the U.S. Army Research Office on grant #W911NF-23-1-0031
to Colorado State University and by grants #N00014-19-1-2708 and #N00014-23-1-2060 from
the U.S. Office of Naval Research to Rensselaer Polytechnic Institute. The positions expressed
herein do not reflect the official position of the U.S. Department of Defense or the United States
government. Any errors or omissions are the responsibility of the authors.
Processing. Trento, Italy, April, 1992.
[35] S. Nirenburg, T. Oates, J. English, Learning by reading by learning to read, in: International</p>
      <p>Conference on Semantic Computing (ICSC 2007), IEEE, 2007, pp. 694–701.
[36] M. McShane, S. Nirenburg, J. English, Multi-stage language understanding and
actionability, Advances in Cognitive Systems 6 (2018) 119–138.
[37] S. Ghafari, N. Krishnaswamy, Detecting and accommodating novel types and concepts in
an embodied simulation environment, arXiv preprint arXiv:2211.04555 (2022).
[38] S. Nirenburg, M. McShane, J. English, Artificial intelligent agents go to school, in: 34th</p>
      <p>International Workshop on Qualitative Reasoning, IJCAI-2021, 2021.
[39] N. Krishnaswamy, W. Pickard, B. Cates, N. Blanchard, J. Pustejovsky, The VoxWorld
platform for multimodal embodied agents, in: Proceedings of the Thirteenth Language
Resources and Evaluation Conference, 2022, pp. 1529–1541.
[40] D. McNeely-White, B. Sattelberg, N. Blanchard, R. Beveridge, Canonical face embeddings,</p>
      <p>IEEE Transactions on Biometrics, Behavior, and Identity Science 4 (2022) 197–209.
[41] J. Merullo, L. Castricato, C. Eickhoff, E. Pavlick, Linearly mapping from image to text
space, arXiv preprint arXiv:2209.15162 (2022).
[42] J. Pustejovsky, N. Krishnaswamy, Multimodal semantics for affordances and actions, in:
Human-Computer Interaction. Theoretical Approaches and Design Methods: Thematic
Area, HCI 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual
Event, June 26–July 1, 2022, Proceedings, Part I, Springer, 2022, pp. 137–160.
[43] A. Nath, R. Ghosh, N. Krishnaswamy, Phonetic, semantic, and articulatory features in
assamese-bengali cognate detection, in: Proceedings of the Ninth Workshop on NLP for
Similar Languages, Varieties and Dialects, 2022, pp. 41–53.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          , Ł. Kaiser,
          <string-name>
            <surname>I. Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          ,
          <source>in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers),
          <source>Association for Computational Linguistics</source>
          , Minneapolis, Minnesota,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          . URL: https://aclanthology.org/N19-1423. doi:10.18653/v1/N19-1423.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Child</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Luan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Amodei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          , et al.,
          <article-title>Language models are unsupervised multitask learners</article-title>
          ,
          <source>OpenAI blog</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          , et al.,
          <article-title>Language models are few-shot learners</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Koller</surname>
          </string-name>
          ,
          <article-title>Climbing towards NLU: On meaning, form, and understanding in the age of data</article-title>
          ,
          <source>in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>5185</fpage>
          -
          <lpage>5198</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McMillan-Major</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shmitchell</surname>
          </string-name>
          ,
          <article-title>On the dangers of stochastic parrots: Can language models be too big?</article-title>
          ,
          <source>in: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>610</fpage>
          -
          <lpage>623</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Niven</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Kao</surname>
          </string-name>
          ,
          <article-title>Probing neural network comprehension of natural language arguments</article-title>
          ,
          <source>in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</source>
          , Association for Computational Linguistics, Florence, Italy,
          <year>2019</year>
          , pp.
          <fpage>4658</fpage>
          -
          <lpage>4664</lpage>
          . URL: https://aclanthology.org/P19-1459. doi:10.18653/v1/P19-1459.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Waldrop</surname>
          </string-name>
          ,
          <article-title>What are the limits of deep learning?</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>116</volume>
          (
          <year>2019</year>
          )
          <fpage>1074</fpage>
          -
          <lpage>1077</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sünderhauf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Brock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Scheirer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hadsell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leitner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Upcroft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abbeel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Burgard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Milford</surname>
          </string-name>
          , et al.,
          <article-title>The limits and potentials of deep learning for robotics</article-title>
          ,
          <source>The International Journal of Robotics Research</source>
          <volume>37</volume>
          (
          <year>2018</year>
          )
          <fpage>405</fpage>
          -
          <lpage>420</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunning</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (XAI)</article-title>
          ,
          <source>MIT Research Lab Technical Report, Defense Advanced Research Projects Agency (DARPA)</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Beale</surname>
          </string-name>
          ,
          <article-title>Language understanding with ontological semantics</article-title>
          ,
          <source>Advances in Cognitive Systems</source>
          <volume>4</volume>
          (
          <year>2016</year>
          )
          <fpage>35</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kohli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Tenenbaum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. d.</given-names>
            <surname>Garcez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Lamb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spranger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <article-title>Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning</article-title>
          ,
          <source>arXiv preprint arXiv:1905.06088</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. d.</given-names>
            <surname>Garcez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bowman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Lamb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>de Penning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Illuminoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Poon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zaverucha</surname>
          </string-name>
          ,
          <article-title>Neural-symbolic learning and reasoning: A survey and interpretation</article-title>
          ,
          <source>Neuro-Symbolic Artificial Intelligence: The State of the Art</source>
          <volume>342</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Lenat</surname>
          </string-name>
          ,
          <article-title>Cyc: A large-scale investment in knowledge infrastructure</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>38</volume>
          (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <article-title>For AI, data are harder to come by than you think</article-title>
          ,
          <source>The Economist</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Beale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Jarrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fantry</surname>
          </string-name>
          ,
          <article-title>Inconsistency as a diagnostic tool in a society of intelligent agents</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>55</volume>
          (
          <year>2012</year>
          )
          <fpage>137</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Beale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Scassellati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Magnin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roncone</surname>
          </string-name>
          ,
          <article-title>Toward human-like robot learning</article-title>
          ,
          <source>in: International Conference on Applications of Natural Language to Information Systems</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>English</surname>
          </string-name>
          ,
          <article-title>Multi-stage language understanding and actionability</article-title>
          ,
          <source>Advances in Cognitive Systems</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <article-title>Linguistics for the Age of AI</article-title>
          , MIT Press,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raskin</surname>
          </string-name>
          ,
          <source>Ontological semantics</source>
          , MIT Press,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>English</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <article-title>OntoAgent: implementing content-centric cognitive models</article-title>
          ,
          <source>in: Proceedings of the Annual Conference on Advances in Cognitive Systems</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>N.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Narayana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bangar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Patil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mulay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Beveridge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ruiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Draper</surname>
          </string-name>
          , et al.,
          <article-title>Communicating and acting: Understanding gesture in simulation semantics</article-title>
          ,
          <source>in: IWCS 2017 - 12th International Conference on Computational Semantics - Short Papers</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>N.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          ,
          <article-title>Combining deep learning and qualitative spatial reasoning to learn complex structures from sparse examples with noise</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>33</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>2911</fpage>
          -
          <lpage>2918</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          ,
          <article-title>Embodied human computer interaction</article-title>
          ,
          <source>KI-Künstliche Intelligenz</source>
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>307</fpage>
          -
          <lpage>327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>N.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          ,
          <article-title>Affordance embeddings for situated language understanding</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>5</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Jarrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fantry</surname>
          </string-name>
          ,
          <article-title>Learning components of computational models from texts</article-title>
          ,
          <source>in: 6th Workshop on Computational Models of Narrative (CMN 2015), Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <article-title>Toward human-style learning in robots</article-title>
          ,
          <source>in: AAAI Fall Symposium on Natural Communication with Robots</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          ,
          <article-title>VoxML: A visualization modeling language</article-title>
          ,
          <source>in: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>4606</fpage>
          -
          <lpage>4613</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>E.</given-names>
            <surname>Viegas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <article-title>The ecology of lexical acquisition: Computational lexicon making process</article-title>
          ,
          <source>in: Proceedings of Euralex</source>
          , volume
          <volume>96</volume>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raskin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sheremetyeva</surname>
          </string-name>
          ,
          <article-title>Lexical acquisition</article-title>
          ,
          <source>NATO Science Series Subseries III Computer and Systems Sciences</source>
          <volume>188</volume>
          (
          <year>2003</year>
          )
          <fpage>133</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>English</surname>
          </string-name>
          ,
          <article-title>Content-centric computational cognitive modeling</article-title>
          ,
          <source>Advances in Cognitive Systems</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>McShane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>English</surname>
          </string-name>
          ,
          <article-title>Overcoming the knowledge bottleneck using lifelong learning by social agents</article-title>
          ,
          <source>in: International Conference on Applications of Natural Language to Information Systems</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nirenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousseau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grannes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>McNeilly</surname>
          </string-name>
          ,
          <article-title>The translator's workstation</article-title>
          ,
          <source>in: Proceedings of the 3rd Conference on Applied Natural Language</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>