<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Workshop on Humanities-Centred Artificial Intelligence (CHAI)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Embodiment of an Agent by a Pepper Robot for Explaining Retrieval Results</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Simon Schif</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Magnus Bender</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ralf Möller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Lübeck, Institute of Information Systems</institution>
          ,
          <addr-line>Ratzeburger Allee 160, 23562 Lübeck</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
<p>Conceptually, an agent perceives its environment through sensors, builds a set of models, and then uses these models to select an appropriate action to fulfill its goals. When an agent is embodied by a robot, even humans who are not familiar with the concept of an agent are more likely to be aware of the presence of an individual, independent of how the agent maps state sequences to actions, than when the agent is part of a web application. In the latter case, agents are sometimes visualized as an animation, such as Microsoft's Clippy. Thus, depending on the context, it is often explicitly desired that humans are aware of an individual while they interact with a system. Our aim is to demonstrate the prototype of our information retrieval (IR) agent, running in the background of our information system (IS), implemented for humanities scholars. Instead of animating our IR agent, we embodied it by a Pepper robot for demonstration purposes only. Pepper is a humanoid robot especially designed for interaction with humans, as it has, among other things, speech-to-text and text-to-speech modules allowing for a verbal conversation between a human and the robot. We tested our approach with humans, not all of whom were familiar with the concept of an IR agent. During the interaction with our IS, Pepper explains, as the IR agent, its behavior. The embodiment of our IR agent by Pepper helps visitors understand the concept of an IR agent and that it is running in the background of our IS, without explaining that explicitly.</p>
      </abstract>
      <kwd-group>
<kwd>Agent</kwd>
        <kwd>Robot</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>Demonstration</kwd>
        <kwd>Curated Datasets</kwd>
        <kwd>Information System</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        An agent in pursuit of a task perceives its environment through sensors, builds a set of models,
and then uses these models to select an appropriate action to fulfill its goals [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It is perceived as being intelligent depending on which actions it selects, given the current state of its environment and its goals, regardless of which (artificial intelligence (AI)) methods are in use to map state sequences to actions. One of these goals could be, for instance, to satisfy the information need of a human. In this case, an IR agent that has access to a large corpus of documents receives a query, and its goal is to assign a score to each document in its corpus, given the query. The highest-scored documents are returned to the human in descending order. Assuming the query and the documents are sequences of words, functions such as TF.IDF, which assign a score to each query-document pair, have been shown to be effective in practice [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. However, such an IR agent is not necessarily perceived as being intelligent.
      </p>
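      <p>To make such a scoring function concrete, the following minimal Python sketch (an illustration with made-up toy data, not the implementation inside our IS) ranks the documents of a corpus by a basic TF.IDF score, given a query:</p>

```python
import math
from collections import Counter

def tf_idf_scores(query, corpus):
    """Score every document in `corpus` against `query` with a basic TF.IDF
    weighting; `corpus` is a list of token lists, `query` a list of tokens."""
    n = len(corpus)
    df = Counter()  # document frequency: in how many documents a term occurs
    for doc in corpus:
        df.update(set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)  # term frequency within this document
        score = sum(tf[t] * math.log(n / df[t]) for t in query if t in tf)
        scores.append(score)
    # (document index, score) pairs, highest-scored document first
    return sorted(enumerate(scores), key=lambda x: x[1], reverse=True)

# A document about bike repair outranks the fantasy novels for this query:
ranking = tf_idf_scores(["bike", "repair"],
                        [["bike", "repair"], ["dragon", "castle"], ["bike", "dragon"]])
```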
      <p>
        For instance, given a corpus of fantasy novels and the query “bike repair shop”, an IR agent would return the documents that are most relevant with respect to the query, but not with respect to the information need of a human who has a bike with a flat tire. The IR agent should approximate the true information need of the human from the query and from the expectations the human has about the IR agent itself. The human expects to retrieve from the IR agent at least one document containing a list of bike repair shops, which the agent is unable to return. If the IR agent is able to identify the gap between the expectations of the human and its own ability to satisfy the information need, then it can select an appropriate action beyond its goal of choosing the most relevant documents given the query: it can act legibly by explaining its behavior if the gap is too large. That is a step towards gaining the human's trust and thus towards being perceived as intelligent. In this work, we implemented and evaluated our IR agent as an extension to our IS, implemented as a web application [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>The web application enables humanities scholars to upload Word documents for the creation of curated datasets. Uploaded Word documents are parsed, preprocessed, and, depending on the context, split into several documents. For instance, a Word document containing hundreds of poems is split into a corpus of documents, where each document contains one poem. We have created viewers for various types of documents, such as poems, to view the contents of the documents in the web application. Additionally, links are created automatically that help to jump between, for instance, words in poems and their corresponding entries in a dictionary.</p>
      <p>
        Our IR agent, part of the web application, not only ranks uploaded documents by relevance in descending order, given a query; it additionally returns an explanation for each document and score, in order to act legibly. A human who accesses a web application providing a search interface, using a tablet, smartphone, laptop, etc., usually does not expect to interact with an IR agent. Our aim is to evaluate our IR agent with real humans who know that there is an IR agent acting in the background. That can be achieved either by explaining the concept of an IR agent, as we do in this paper, or by embodying our IR agent with a robot that has a text-to-speech module. The robot verbally explains its behavior on demand, and humans are aware that there is an IR agent running in the background, which changes their expectations and perceptions while using our IS. We evaluate our approach by using a Pepper robot [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. As has been shown, the Pepper robot is a very effective tool to show others what happens in the background of our IS, without explicitly explaining the concept of an IR agent.
      </p>
      <p>We introduce in Section 2 our IS, which we extend with the IR agent we present in Section 3. In Section 4 we show how we use a Pepper robot to present our work to an audience in which some had never heard of an IR agent before, and in Section 5 we discuss how our IR agent can be made human-aware. Finally, we present related work in Section 6, conclude our results in Section 7, and give an outlook on future research directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Web Application</title>
      <p>
        Humanities scholars work with specific tools and document formats across chronological and geographical borders to reach their goals. For instance, the goal may be to produce a critical edition from a large collection of palm-leaf manuscripts and editions, such as [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], created by Eva Wilden. A critical edition traces the trajectory a text made through various manuscript and print versions into the modern day. Producing a critical edition can take up to several years, and often many humanities scholars are involved. Regardless of the preferred document formats and tools in use, a finished critical edition is mostly published as a printed book or online as a PDF. We argue that this violates the FAIR (Findable, Accessible, Interoperable, Reusable) principles. Findability is often not a problem at all, since published books mostly have associated metadata to be findable by humans and machines. However, the contents of a critical edition are possibly not searchable and require a faceted IR system. Accessibility does not only concern how the data is accessible; it is additionally important to make clear who is allowed to access what. For instance, not everyone is allowed to access some pictures of manuscripts in the printed books, but everything else is open. Since the images are inseparable from the rest of the book, only those who are allowed to see the images are allowed to access the printed critical edition. A printed book or a PDF is made to be readable by humans, not to be interoperable with programs other than those that visualize or print the contents of a PDF. Finally, metadata should be well-described, such that other programs can reuse the associated data. A critical edition that does not violate the FAIR principles allows for faceted searches, automatic linking, access control, and the transformation of contents into various formats. However, humanities scholars prefer to use what-you-see-is-what-you-get (WYSIWYG) tools such as Microsoft Word, as they always see the current state of the book.
      </p>
      <p>
        Our web application allows humanities scholars to keep working with their preferred tools, such as Microsoft Word, and document formats across chronological and geographical borders, and yet to produce data that does not violate the FAIR principles. Word documents can be uploaded to our web application; specific parts that are written in a specific controlled natural language are parsed, split, and loaded into a database. The parser is automatically generated from an Antlr4 [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] grammar, allowing it to be adapted easily to other types of documents. Viewers, part of the web application, are used in lectures for the visualization of the contents of the database. Additionally, one can merge specific parts of documents automatically on demand, which would otherwise take a humanities scholar weeks of work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Among these features and those we would like to add in the future, we implemented an IR agent, which we present in Section 3 and evaluate in Section 4 by embodying it by a Pepper robot.
      </p>
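      <p>The splitting step can be illustrated by the following Python sketch; it is hypothetical in that it separates poems by blank lines, whereas our web application parses documents written in a controlled natural language with an Antlr4-generated parser:</p>

```python
def split_into_documents(text, delimiter="\n\n"):
    """Split one uploaded manuscript into a corpus with one document per poem.
    Hypothetical sketch: a blank line is assumed to separate two poems."""
    poems = [p.strip() for p in text.split(delimiter) if p.strip()]
    # each poem becomes its own document, keyed by its position in the upload
    return {i: poem for i, poem in enumerate(poems)}
```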
    </sec>
    <sec id="sec-3">
      <title>3. Information Retrieval Agent</title>
      <p>
        Word documents containing hundreds of texts, such as poems, are each treated as corpora of texts, where each text is a document. Given a word and its context, part of a document, one could be interested in other documents containing text snippets within a similar context. We assume that the surrounding words of a word within a text make up the context and refer to the context as a subjective content description (SCD) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Our IR agent assigns a score to all text snippets within all documents in the corpora, given a word and its context as a query. Additionally, our IR agent adds an explanation for each score it assigns to the text snippets. Finally, all text snippets are returned to the human, along with the associated document and an explanation, sorted by score in descending order.
      </p>
      <p>More formally, our IR agent has access to a set of documents d that are part of a corpus D. Each document d is a sequence of words ⟨w_1, . . . , w_n⟩ of length n. We assume that the surrounding words {w_j | i − r ≤ j ≤ i + r} of a word w_i within a given radius r make up the context of the word w_i. As depicted in Figure 1, document d is a sequence of words ⟨w_1, . . . , w_13⟩. The context is highlighted in red cross lines, for each of the words w_5, w_6, w_7, and w_8 respectively. For instance, the words that make up the context for the word w_7 are {w_3, . . . , w_11}, as depicted in the third row in Figure 1. In Algorithm 1, we show how to initially compute, for each word in every document in the corpus, the words that make up the context. The result is a mapping C that maps every word w_i to the set of words making up its context, C(w_i^d) = {w_{i−r}, . . . , w_{i+r}}, with d being the current document, i being the position of word w_i in document d, and r the radius.</p>
      <p>Algorithm 1 Compute Windows
1: procedure contextWindows(D, r) ◁ Corpus D and radius r
2:   C : w_i^d → {w_{i−r}, . . . , w_{i+r}}
3:   for all d ∈ D do
4:     removeStopWords(d) ◁ Remove stop words from document d
5:     for i ← r to |d| − r do ◁ Length |d| of document d
6:       C(w_i^d) ← {}
7:       for j ← i − r to i + r do
8:         C(w_i^d) ← C(w_i^d) ∪ {w_j}
9:   return C ◁ Return mapping C</p>
      <p>The sets of context words in one document possibly have similar sets in other documents. We measure the similarity of two sets by the size of their intersection (i.e., the number of words they have in common). If it is above a given threshold t, then we assume that both contexts are similar to an extent.</p>
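      <p>Algorithm 1 can be sketched in Python as follows (a minimal sketch with names of our own choosing; it clips windows at document borders, whereas the pseudocode skips the first and last r positions):</p>

```python
def context_windows(corpus, r, stop_words=frozenset()):
    """Compute, for every word position in every document, the set of words
    within radius r that make up its context (Algorithm 1).
    `corpus` maps document ids to token lists."""
    contexts = {}
    for doc_id, words in corpus.items():
        # remove stop words before building the windows
        words = [w for w in words if w not in stop_words]
        for i in range(len(words)):
            lo, hi = max(0, i - r), min(len(words), i + r + 1)
            contexts[(doc_id, i)] = set(words[lo:hi])  # window around word i
    return contexts
```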
      <p>Algorithm 2 returns a mapping R, mapping words in all documents to text snippets in other documents from a similar context, if the similarity is above a given threshold t. A human who is interested in text snippets from a similar context sends a word as a query to our IR agent. Our IR agent returns all documents of similar context, given the word as a query, that contain the text snippets returned by mapping R, in descending order with respect to the similarity of the text snippets.</p>
      <p>Algorithm 2 Compute Results
1: procedure computeResults(C, t) ◁ Contexts C and threshold t
2:   R : w_i^d → {(w_j^e, s)}
3:   for all w_i^d ∈ C do
4:     for all w_j^e ∈ C, e ≠ d do
5:       s ← |C(w_i^d) ∩ C(w_j^e)|
6:       if s ≥ t then
7:         R(w_i^d) ← R(w_i^d) ∪ {(w_j^e, s)}
8:   return R ◁ Return mapping R</p>
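      <p>A minimal Python counterpart of Algorithm 2, operating on the context mapping computed by Algorithm 1 (the function and variable names are our own; the similarity is the size of the intersection of the two context sets):</p>

```python
def compute_results(contexts, t):
    """Map each word position to the word positions in *other* documents whose
    context sets share at least t words (Algorithm 2).
    `contexts` maps (document id, position) to a set of context words."""
    results = {}
    for key_a, ctx_a in contexts.items():
        hits = []
        for key_b, ctx_b in contexts.items():
            if key_a[0] == key_b[0]:
                continue  # only snippets from other documents qualify
            overlap = len(ctx_a & ctx_b)  # similarity: size of the intersection
            if overlap >= t:
                hits.append((key_b, overlap))
        # most similar snippets first, as returned to the human
        results[key_a] = sorted(hits, key=lambda x: x[1], reverse=True)
    return results
```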
      <p>Given the i-th word w_i in document d, the context C(w_i^d) of the word, a text snippet w_j^e ∈ R(w_i^d) with C(w_j^e) = {w_{j−r}, . . . , w_{j+r}} from another document e ≠ d, and radius r, our IR agent has to generate an explanation of why it has returned the document e, among others, as a result for the query w_i^d. It generates an explanation for each document e in the result set by returning an excerpt from each document that contains the words w_j^e ∈ R(w_i^d). Each excerpt is a sequence of words containing C(w_j^e), of which the words in C(w_i^d) ∩ C(w_j^e) are emphasized. The human can decide to send another query to the IR agent and to change radius r or threshold t. Even if results do not satisfy the information need of the human, the IR agent acts legibly from the perspective of the human. Changing r and t each to a value that possibly leads to more sophisticated results is feasible for the human, as the IR agent explains how it computes a result.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <p>
        At the open day of the Centre for the Study of Manuscript Cultures (CSMC)1 we presented our web application to an audience in which not everyone was familiar with the concept of an IR agent. Instead of explaining the concept of an IR agent, we embodied the IR agent by the Pepper robot [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The experimental setup is depicted in Figure 2. The presenter sits in front of a table with a laptop, hosting the web application as well as running a web browser for accessing it. A 75-inch screen is behind the presenter, mirroring the screen of the laptop, at a height such that the whole screen is visible to all visitors in front of the table. Pepper stands on the left-hand side of the table, near enough for the visitors to see the contents of its display.
        1https://www.csmc.uni-hamburg.de/openday-en.html
      </p>
      <p>
        Pepper's tablet has a display, a TCC8925 processor with a single ARMv7 A5 core of up to 833 MHz, and 1 GB RAM [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It currently runs Android 6.0 “Marshmallow”, allowing one to install Android apps from the official Google Play store and to deploy self-developed Android apps. Due to the hardware limitations of the tablet, even the Android interface itself is sometimes jerky; therefore, the graphical design of Android apps is limited to an extent.
      </p>
      <p>Pepper is equipped with a text-to-speech module that can be accessed over an application programming interface (API) when one develops an Android app that runs on the tablet of Pepper. The tablet then sends texts over the internal network to the machine inside Pepper's head, which is responsible, among other things, for translating text into speech, which the human can then hear over Pepper's speakers. As depicted in Figure 3, the laptop running the web application is connected with Pepper over a network. We developed an Android app that opens a WebSocket in the background for receiving texts from a JavaScript interface, accessible via our web application. Texts are then forwarded over the API to the text-to-speech module inside Pepper's head. As we added a web view to our Android app, our web application can be used both on the laptop and directly on the tablet of the Pepper robot. We have created a video of Pepper for demonstration purposes.2</p>
      <p>Approximately 20 visitors, from the humanities, chemistry, biology, and computer science, visited our stand at the open day of the CSMC. Only the computer scientists had heard of the concept of an IR agent beforehand. The presenter uses the web application on the laptop to first upload a Word document. Pepper then explains how he processes the uploaded document, as if he were the IR agent in the background, as described in Section 2. After the document is processed, its title is visible in the web application. It possibly consists of several texts, which are each treated as a document and loaded into a database. All documents in the database can be listed, and their contents can be viewed with a viewer in our web application. Words w with R(w) ≠ ∅ are highlighted in the web application as clickable, while the others, with R(w) = ∅, are not. The visitors decide on which word the presenter should click. Finally, Pepper, as the IR agent, explains how it computes the results, as described in Section 3. As far as we can tell from the feedback and questions in response to our presentation, all of the visitors were able to understand how our IR agent computes the results given a query, that our IR agent is running in the background, and that the results are relevant. It was not necessary to explain all the technical details, as we do in Section 2 and Section 3.
2https://www.fdr.uni-hamburg.de/record/10769/files/KI2022_CHAI-presentation4.zip</p>
    </sec>
    <sec id="sec-5">
      <title>5. Human Aware IR Agent</title>
      <p>
        Visitors are aware of an IR agent running in the background of our IS, implemented as a web application, because we embodied our IR agent by a Pepper robot. The Pepper robot explains, as the IR agent, how it processes documents and returns them sorted in descending order by a score it assigns to each of them, given a query. As we propose in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], our IR agent could greatly improve its performance if it were aware of the human, such that the two can then collaboratively seek information. We refer to such an IR agent as a human-aware IR agent, where the human and the IR agent are modeled with their mental models ℳ_H and ℳ̃_A respectively, as depicted in Figure 4. On the left-hand side, the IR agent approximates the information need of the human ℳ_H as ℳ̃_H^a. The IR agent also has its own mental model ℳ̃_A, containing the information need of the human from the perspective of the IR agent. This is comparable to a customer explaining to an IT specialist what requirements an application to be developed has to meet. The IT specialist has years of experience, similar to our IR agent, which is able to go through all documents in a corpus it has access to, and knows that the program has to meet more than the customer's requirements to work properly.
      </p>
      <p>A human sending a query to our IR agent is aware of our IR agent's mental model ℳ̃_A, and the IR agent itself is aware of that, as depicted on the right-hand side in Figure 4. The human approximates the IR agent's mental model ℳ̃_A as ℳ̃_A^h'. If the difference between ℳ̃_A and ℳ̃_A^h' is too large, then the IR agent's behavior is not explicable and it should explain its behavior. As in the example before, a human and an IT specialist aim to find all requirements a program has to meet. The human expects that the IT specialist has experience in developing an application and possibly expects suggestions for improvements. If the IT specialist notices that the human does not understand his suggestions, then the specialist should explain them. The human is more likely aware of ℳ̃_A if the IR agent is embodied by a robot or animated, as we have shown in Section 4.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Related Work</title>
      <p>
        The animation of an agent is often done to make humans aware that an actual agent is running in the background, which can improve the collaboration between humans and agents and make agents more lifelike [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Humans are even more likely to be aware of the copresence of other humans if those are animated as avatars [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. However, as has been shown in the past, the animation of an agent is not sufficient at all, as it turned out with Clippy [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Among other things, Clippy often interrupts a person to provide assistance even though no help is needed, and even if it is needed, the goals of humans are often wrongly anticipated. As Kambhampati notes in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], this problem has not yet been solved in the field of robotics, where agents are embodied by a robot, but it is crucial for the collaboration between a human and a robot. Li et al. note in a survey that humans perceive agents more positively when they are embodied by a robot that is physically on site rather than virtually present or animated [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Thellman et al. in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] add that there might be no difference with respect to social presence, but note that their study is domain-specific and short. Our contribution is to first make humans aware of our IR agent running in the background of our web application and then, as future work, to make our IR agent aware of the humans interacting with it.
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion and Future Work</title>
      <p>As has been shown at our demonstration, we do not need to explicitly explain the concept of our IR agent if it is embodied by a humanoid robot such as Pepper. Currently, we use the API of Pepper such that it speaks out what the web application sends to it. The API provides more than that, and we aim to extend our IR agent demonstration such that visitors can interact with it using the speech-to-text and text-to-speech modules inside the machine in Pepper's head. That allows our IR agent to be perceived even more as an individual that aims to collaboratively seek information together with humans and thereby to satisfy their information needs.</p>
      <p>
        As mentioned in Section 5, an IR agent can greatly improve its performance if it is human-aware. We will further develop our IR agent [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] such that it is human-aware and can then be embodied by the Pepper robot for demonstration purposes.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
<surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          ,
          <source>Artificial Intelligence: A Modern Approach</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <article-title>A statistical interpretation of term specificity and its application in retrieval</article-title>
          ,
          <source>Journal of documentation 28</source>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Langer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Breitinger</surname>
          </string-name>
          ,
          <article-title>Paper recommender systems: a literature survey</article-title>
          ,
          <source>International Journal on Digital Libraries</source>
          <volume>17</volume>
          (
          <year>2016</year>
          )
          <fpage>305</fpage>
          -
          <lpage>338</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Melzer</surname>
          </string-name>
          , E. Wilden,
          <string-name>
            <given-names>R.</given-names>
            <surname>Möller</surname>
          </string-name>
          ,
          <article-title>TEI-Based Interactive Critical Editions</article-title>
          ,
          <source>in: International Workshop on Document Analysis Systems</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>230</fpage>
          -
          <lpage>244</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gelin</surname>
          </string-name>
          ,
          <article-title>A Mass-Produced Sociable Humanoid Robot</article-title>
          ,
          <source>IEEE Robotics &amp; Automation Magazine</source>
          <volume>25</volume>
          (
          <year>2018</year>
          )
          <fpage>40</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Wilden</surname>
          </string-name>
          ,
          <article-title>A Critical Edition and an Annotated Translation of the Akana¯n_u¯_ru: Part 1, Kali_r_riya¯n_ainirai. Old commentary on Kali_r_riya¯n_ainirai KV - 90, word index of Akana¯n_u¯_ru KV - 120</article-title>
          , École Française d'Extrême-Orient
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Parr</surname>
          </string-name>
          ,
          <source>The Definitive ANTLR 4 Reference</source>
          , Pragmatic Bookshelf,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Kuhr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Braun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Möller</surname>
          </string-name>
          ,
          <article-title>To Extend or not to Extend? Context-Specific Corpus Enrichment</article-title>
          ,
          <source>in: Australasian Joint Conference on Artificial Intelligence</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>368</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Möller</surname>
          </string-name>
          ,
          <article-title>On Human-Aware Information Seeking</article-title>
          , in: CHAI@KI,
          <year>2021</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kambhampati</surname>
          </string-name>
          ,
          <article-title>Challenges of Human-Aware AI Systems</article-title>
          , CoRR abs/1910.07089 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1910.07089.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Holz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dragone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>O'Hare</surname>
          </string-name>
          ,
          <article-title>Where Robots and Virtual Agents Meet</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>1</volume>
          (
          <year>2009</year>
          )
          <fpage>83</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Thiebaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marsella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Marshall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kallmann</surname>
          </string-name>
          ,
          <article-title>SmartBody: Behavior Realization for Embodied Conversational Agents</article-title>
          ,
          <source>in: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>151</fpage>
          -
          <lpage>158</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gerhard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hobbs</surname>
          </string-name>
          ,
          <article-title>Embodiment and copresence in collaborative interfaces</article-title>
          ,
          <source>International Journal of Human-Computer Studies</source>
          <volume>61</volume>
          (
          <year>2004</year>
          )
          <fpage>453</fpage>
          -
          <lpage>480</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Baym</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Shifman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Persaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wagman</surname>
          </string-name>
          ,
          <article-title>Intelligent Failures: Clippy Memes and the Limits of Digital Assistants</article-title>
          ,
          <source>AoIR Selected Papers of Internet Research</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots and Virtual Agents</article-title>
          ,
          <source>International Journal of Human-Computer Studies</source>
          <volume>77</volume>
          (
          <year>2015</year>
          )
          <fpage>23</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Thellman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Silvervarg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ziemke</surname>
          </string-name>
          ,
          <article-title>Physical vs. Virtual Agent Embodiment and Effects on Social Interaction</article-title>
          ,
          <source>in: International conference on intelligent virtual agents</source>
          , Springer,
          <year>2016</year>
          , pp.
          <fpage>412</fpage>
          -
          <lpage>415</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>