<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Chatbots' Greetings to Human-Computer Communication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Maria João Pereira</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Luísa Coheur</string-name>
          <email>luisa.coheur@inesc-id.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pedro Fialho</string-name>
          <email>alho@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ricardo Ribeiro</string-name>
          <email>ricardo.ribeiro@inesc-id.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INESC-ID Lisboa</institution>
          ,
<country country="PT">Portugal</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IST, Universidade de Lisboa</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
<institution>Instituto Universitário de Lisboa (ISCTE-IUL) Rua Alves Redol</institution>
          ,
          <addr-line>9, 1000-029 Lisboa</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>511</volume>
      <fpage>24</fpage>
      <lpage>25</lpage>
      <abstract>
<p>In recent years, chatbots have gained renewed attention, due to the interest shown by widely known personalities and companies. The concept is broad and, in this paper, we target the work developed by the (older) community that is typically associated with chatbot competitions. In our opinion, this community contributes very interesting know-how, but especially large-scale corpora, gathered through interactions with real people: an invaluable resource considering the renewed interest in deep neural networks.</p>
      </abstract>
      <kwd-group>
        <kwd>natural language interfaces</kwd>
        <kwd>agent-based interaction</kwd>
        <kwd>intelligent agents</kwd>
        <kwd>interaction design</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Chatbots are currently a hot-topic, both for industry and
academia
        <xref ref-type="bibr" rid="ref13 ref7">(Dale, 2016; Følstad and Brandtzaeg, 2017)</xref>
        .
There are many platforms that help in developing such systems,
and the number of new chatbots continues to increase at
a dizzying pace. The Pandorabots hosting service1 claims to
have more than 225,000 botmasters (people in charge of
creating/maintaining a chatbot), who have built more than
280,000 chatbots, resulting in more than 3 billion
interactions (numbers collected in July 2017). On the academia
side, since 2016, at least four workshops have been dedicated to
chatbots (defined as non-goal-oriented dialogue systems),
which have been co-located with well-established
conferences2; also, several works point out how chatbots could be
used in learning environments (e.g.,
        <xref ref-type="bibr" rid="ref2">(Bibauw et al., 2019)</xref>
        and
        <xref ref-type="bibr" rid="ref12">(Fialho et al., 2013)</xref>
        ).
      </p>
      <p>Although the current definition of chatbot is broader than
the one we use in this paper3, we will use the word
“chatbot” to name the old-school chatbots, typically associated
with chatbot competitions.</p>
      <p>
We focus on chatbots that freely engage in conversation about
any subject (the non-goal-oriented feature), making them
“entertaining in a large variety of conversational topic
settings”
        <xref ref-type="bibr" rid="ref24">(Schumaker et al., 2007)</xref>
        . However, these are also
systems that “seek to mimic conversation rather than
understand it”; that is, there is no real intention of making
them “intelligent”, as the main goal of their developers is
to make these chatbots effective in their simulation of
intelligence. Some of these chatbots were developed and
tailored with the goal of participating in chatbot
competitions (in fact, the term chatbot was coined in
        <xref ref-type="bibr" rid="ref20">(Mauldin,
1994)</xref>
        to name the systems that have the goal of passing the
Turing Test
        <xref ref-type="bibr" rid="ref25">(Turing, 1950)</xref>
        ), and, because of that, some have
gained visibility. The lack of full descriptions of, and papers
about, these chatbots (which explains the abnormal number
of references to web pages in this paper) makes it difficult
to uncover the technology and the real possibilities behind
them. In this paper, we unveil the main contributions of
this community, as we believe that this line of work can
bring important insights to the human-machine
communication field; some of these chatbots contribute large amounts
of data gathered during their interactions with the crowd,
which could be used by current data-driven chatbots (e.g.,
        <xref ref-type="bibr" rid="ref18 ref26">(Li et al., 2016; Vinyals and Le, 2015)</xref>
        ). As we will see,
these chatbots range from “simpler” ones, based on
pre-written pattern-matching templates exploiting large stores
of prepared small-talk responses, to more complex
architectures, based on some sort of learning process. Finally, we
will see that concepts/tricks introduced by some chatbots
often result in a more solid contribution to the “illusion of
intelligence” than the models involved.4 This document is
organised as follows: in Section 2. we present a brief
historical overview; in Section 3. we discuss chatbot platforms
and how to enrich them; in Section 4. we summarise
the main “tricks” towards the “illusion of intelligence”;
finally, in Section 5., we present some conclusions and point
to future challenges.
1www.pandorabots.com
2workshop.colips.org/wochat/
3Many terms are used as synonyms of chatbot, for instance
dialogue system, avatar, intellectual agent, and virtual person.
A list of more than 160 terms used as synonyms of chatbot can be
found in www.chatbots.org/synonyms/.
      </p>
    </sec>
    <sec id="sec-2">
<title>2. Historical overview</title>
      <p>In this section, we briefly review the history of these
chatbots, moving from the first chatbots to the ones with
which we interact nowadays.</p>
    </sec>
    <sec id="sec-3">
<title>2.1. Early days</title>
      <p>
Although the term chatbot had not yet been coined at that
time, the first chatbot came to the public in 1966 under
the appearance of a Rogerian psychotherapist called
Eliza
        <xref ref-type="bibr" rid="ref28">(Weizenbaum, 1966)</xref>
        . Eliza was a program
developed by Joseph Weizenbaum that was able to establish a
conversation with human beings, simulating that it was one too.
Eliza's conversational model was based on the rephrasing
of input sentences when these matched a set of pre-defined
rules. For instance, consider the following rule5, constituted
by a regular expression (match) and an answer (answer):
match: * you are *
answer: What makes you think I am (2)?
4An extended version of this paper can be found in https:
//arxiv.org/abs/1609.06479.
      </p>
      <p>Example 1.</p>
      <p>In this rule, if the match part coincides with the input (*
is the wildcard and matches any sequence of words), the
text associated with the answer part is returned, with
the variable (2) replaced by the sequence from the input
captured by the second wildcard. The following dialogue
(Example 2) illustrates an application of this rule. Notice
that some internal processing needs to be done so that the
sequence captured by (2), entitled to your opinion, is
modified into entitled to my opinion.</p>
      <p>user: You are entitled to your opinion.</p>
      <p>Eliza: What makes you think I am entitled
to my opinion?</p>
      <p>Example 2.</p>
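<p>The rule application and pronoun swapping described above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions: the reflection table and helper names are ours, not Eliza's actual code.</p>

```python
import re

# Hypothetical reflection table mapping second-person forms to first-person
# ones (and vice versa); Eliza's real table is larger.
REFLECTIONS = {"you": "I", "your": "my", "yours": "mine",
               "i": "you", "my": "your", "mine": "yours"}

def reflect(text):
    """Swap pronouns so a captured fragment reads correctly in the answer."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

# The rule from Example 1: each '*' wildcard becomes a capture group, and
# (2) in the answer refers to the text captured by the second wildcard.
MATCH = re.compile(r"(.*)you are(.*)", re.IGNORECASE)
ANSWER = "What makes you think I am {2}?"

def respond(user_input):
    m = MATCH.match(user_input.rstrip("."))
    if m is None:
        return None  # the rule does not apply to this input
    groups = [reflect(g).strip() for g in m.groups()]
    # Positional argument 0 is a placeholder so that {2} means 'second wildcard'.
    return ANSWER.format("", *groups)
```

<p>Applied to the input of Example 2, respond("You are entitled to your opinion.") produces “What makes you think I am entitled to my opinion?”, including the internal modification of your into my.</p>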
      <p>
        Eliza completely exceeded expectations: many people, when
interacting with it, believed they were talking with another
human (an outcome now called the “Eliza effect”). Without
any intention of modelling the human cognitive process, and
despite its simplicity, Eliza showed how a program
impersonating a specific professional role can make a huge
impression through the mere illusion of understanding.
Weizenbaum was taken aback by some aspects of this success
        <xref ref-type="bibr" rid="ref14">(Hutchens, 1997)</xref>
        .
What shocked him most was the fact that people actually
believed that the program understood their problems6.
Perceiving Eliza as a threat, Weizenbaum wrote “Computer
Power and Human Reason”
        <xref ref-type="bibr" rid="ref15">(Kuipers et al., 1976)</xref>
        with the
aim of attacking the Artificial Intelligence (AI) field and
educating uninformed persons about computers.
Nowadays, Eliza is still one of the most widely known
programs in AI and is at the base of a great number of
chatbots, including Parry, its “successor”. Following a
very similar architecture to that of Eliza, Parry appeared in
1971 at the hands of Kenneth Colby, simulating a paranoid
mental patient
        <xref ref-type="bibr" rid="ref22">(Saygin et al., 2000)</xref>
        . An interesting
comparison between Parry and Eliza was made by Güzeldere
and Franchi7: “Parry's strategy is somewhat the reverse
of Eliza's”, as one simulates the doctor, distant and
without personality traits, and the other a paranoid patient
who states his anxieties. Differently from Eliza, Parry
has knowledge of the conversation, and it also has some sort
of “state of mind”. The combination of these two factors
affects the output, as it becomes a function not only of the
input, but also of Parry's beliefs, desires, and intentions.
In
        <xref ref-type="bibr" rid="ref20">(Mauldin, 1994)</xref>
        a few tricks to which Parry resorts are
summarised, namely: (1) admitting ignorance; (2)
changing the conversation topic; and, (3) introducing small
stories about the Mafia throughout the conversation. These
three tricks are (respectively) illustrated in the following
answers given by Parry:
      </p>
      <p>Parry: I don’t get you.
...</p>
      <p>Parry: Let’s talk about something else.
...</p>
      <p>Parry: I know the mob controls the big
rackets.</p>
      <p>Example 3.</p>
      <p>
        After Colby gathered transcripts of interviews between
psychiatrists, patients and his program, he presented the results
to another group of psychiatrists. He asked this group whether
they could guess in which transcripts the interviewee was a
human and in which ones it was a program. The
psychiatrists could not do better than random guessing.
It is possible to conclude from these results that the
emotional side can be easier to imitate than the intellectual
one
        <xref ref-type="bibr" rid="ref15">(Kuipers et al., 1976)</xref>
        . However, one of the main
criticisms Parry received was that it was no more than an
illusion, incapable of modelling a real person
        <xref ref-type="bibr" rid="ref6">(Colby, 1974)</xref>
        .
      </p>
    </sec>
    <sec id="sec-4">
      <title>The chatbots’ competitions</title>
      <p>
        Moving back to 1950, Alan Turing questioned “can
machines think?”
        <xref ref-type="bibr" rid="ref25">(Turing, 1950)</xref>
        , and proposed a way of
testing it: the imitation game (now known as the Turing Test).
The original imitation game is played by a man, a woman
and an interrogator whose objective is to guess the sex of
the players. Turing proposed substituting one of the players
by a machine and playing the same game. In this version,
if the interrogator wrongly identifies who is the human it
means that the machine “can think”.
      </p>
      <p>
        Based on (their own interpretation of) the Turing Test,
chatbot competitions keep appearing. The Chatterbox Challenge8
or, more recently, the Chatbot Battles9 are examples of
such competitions, although the most widely known is the
Loebner Prize10, in which participants are challenged with a
simplified version of the total Turing Test
        <xref ref-type="bibr" rid="ref21">(Powers, 1998)</xref>
        .
This prize is due to Hugh Loebner, who offered a reward
to the first person whose program could pass the proposed
test. The first Loebner Prize Contest took place in 1991,
at Boston’s Computer Museum
        <xref ref-type="bibr" rid="ref9">(Epstein, 1992)</xref>
        , and, since
then, the competition has been held annually in the quest of
finding the “thinking computer”.
      </p>
      <p>As some chatbots competing for the Loebner Prize are
indeed capable of managing a conversation, keeping it
consistent at least for a while, every year the most human-like
computer is distinguished with a prize. However, from the
first edition of the Loebner Prize, in 1991, until now, no one
has won it. Nevertheless, in another Turing Test, organised in
2014 by the U.K.'s University of Reading, a chatbot
simulating a 13-year-old boy, named Eugene Goostman, created
by Vladimir Veselov and his team, convinced 33% of the
human judges that it was human.
5Inspired by Eliza's implementation in
search.cpan.org/˜jnolan/Chatbot-Eliza-1.04/
6www.alicebot.org/articles/wallace/eliza.html
7www.stanford.edu/group/SHR/4-2/text/dialogues.html
8web.archive.org/web/20150905221931/http://www.
chatterboxchallenge.com/
9www.chatbotbattles.com
10www.loebner.net/Prizef/loebner-prize.html</p>
      <p>
        This event brought to the spotlight the old question of AI
and generated (again) much controversy. In fact, many
people consider that there was a misunderstanding of Turing’s
intentions in the different implementations of the Turing
test, as deep models of thinking were a presupposition
underlying Turing’s imitation game. Following this, even if a
chatbot was good enough to deceive the jury, it would not
pass the Turing Test in Turing’s sense, as it does not have
a cognition model behind it. Another important criticism is
stressed by Levesque
        <xref ref-type="bibr" rid="ref17">(Levesque, 2014)</xref>
        . For this author, AI
is the science that studies “intelligent behaviour in
computational terms”, and the ability to be evasive, although
interesting, may not show real intelligence. A computer
program should be able to demonstrate its intelligence
without the need for being deceptive. In this sense, Levesque
et al.
        <xref ref-type="bibr" rid="ref16">(Levesque et al., 2012)</xref>
        further explore this idea by
conceiving a reading comprehension test based on binary
choice questions with specific properties that make them
less prone to approaches based on deception. Apart from
the numerous controversies regarding the Turing Test, the
fact is that all these competitions strongly contributed to the
main advances in the field, and the most popular chatbots
are the ones that were/are present in these competitions.
      </p>
    </sec>
    <sec id="sec-5">
<title>3. Building chatbots</title>
      <p>Behind each chatbot there is a development platform.
These are typically based on a scripting language that
allows the botmaster to handcraft its knowledge base, as well
as an engine capable of mapping the user's utterances to
the most appropriate answers.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1. Scripting languages/platforms</title>
      <p>
        An impressive collection of Elizas can currently be found
on the web. For instance, Chatbot-Eliza11 is an
implementation in Perl that can be used to build other chatbots.
Knowledge is coded as a set of rules that are triggered when
matched against the user's input. Some of the available
programs offer features such as a certain capability to
memorise information, add synonyms, or rank keywords.
The most popular language for building chatbots is probably the
“Artificial Intelligence Markup Language”, widely known
as AIML,12 a derivative of XML that includes specific tags.
As usual, knowledge is coded as a set of rules that
match the user input, associated with templates, the
generators of the output. The wide usage of AIML can be justified
by the following facts: besides its detailed specification, its
community allows anyone to obtain, for free, interpreters of
AIML in almost all coding languages, from Java (Program
D) to C/C++ (Program C) or even Lisp (Program Z); the
set of AIML files that constitute the contents of A.l.i.c.e.'s
brain can also be freely obtained13. All the pandorabots are
based on AIML, more specifically on AIML 2.0. This
release is usually characterised as being very easy to
modify, develop and deploy. Therefore, anyone, even
non-computer-experts, can make use of it
        <xref ref-type="bibr" rid="ref27">(Wallace et al., 2007)</xref>
        ,
as no prior knowledge about AIML is required.
ChatScript14, a scripting language and open-source
engine, should also be addressed, as it is at the basis of Suzette
(2010 Loebner Prize winner), Rosette (2011 Loebner Prize
winner), Angela (2nd place in the 2012 Loebner Prize), and
Rose (2014 Loebner Prize winner). It comes
with useful features, including an ontology of nouns, verbs,
adjectives and adverbs, and offers a scripting language
inspired by the Scone project, a knowledge-base system
developed to support human-like common-sense
reasoning and the understanding of human language
        <xref ref-type="bibr" rid="ref11">(Fahlman, 2011)</xref>
        . According to Bruce Wilcox, its creator, ChatScript
settles several AIML problems, such as not being
reader-friendly. In fact, as AIML is based on recursive
self-modifying input, it is harder to debug and maintain. A
detailed comparison between ChatScript and AIML
capabilities was made available by Wilcox as a motivation for the
development of a new (his own) chatbot platform.15
11search.cpan.org/˜jnolan/Chatbot-Eliza-1.04/Chatbot/
Eliza.pm
12www.alicebot.org/aiml.html
13code.google.com/p/aiml-en-us-foundation-alice/
downloads/list
      </p>
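<p>To make the rule/template mechanism described above concrete, a minimal AIML category might look as follows. This is a hand-written sketch, not taken from A.l.i.c.e.'s actual brain files; in AIML, the * wildcard captures input text and the star tag echoes the captured text inside the template.</p>

```xml
<category>
  <!-- The pattern side: matched against the normalised user input. -->
  <pattern>MY NAME IS *</pattern>
  <!-- The template side: generates the answer; <star/> is replaced
       by whatever the * wildcard captured. -->
  <template>Nice to meet you, <star/>!</template>
</category>
```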
    </sec>
    <sec id="sec-7">
<title>3.2. Building chatbots by chatting</title>
      <p>Another approach to developing chatbots' knowledge sources,
which avoids handcrafted rules, is based on chatting and
learning from the resulting chats. Systems like
Jabberwacky (and Cleverbot) learn by keeping
previously unseen user interactions and posing them later to other
users. The acquired answers are then considered suitable
answers for those interactions. That is, they learn to talk
by talking, relying on what has been said before by
users and mimicking them. The user's intelligence becomes
“borrowed intelligence”: instead of being wasted, it
enters a loop in which what is said is kept (along with the
information of when it was said) and may later be exposed to
another user. The replies thus given are saved as new responses
that the system can give in the future. It is only possible to
give a brief overview of Jabberwacky's or Cleverbot's learning
mechanisms, as their architecture is not available to the public.
The only disclosed aspect is that the AI model is not one of those
usually found in other systems, but a “layered set of heuristics that
produce results through analyses of conversational context
and positive feedback”16.</p>
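<p>The publicly described loop (keep unseen utterances, pose them back to other users later, and store their replies as future answers) can be sketched as a toy model in Python. This is speculative by necessity, since Jabberwacky's real heuristics are not public; class and method names are ours.</p>

```python
import random

class BorrowedIntelligenceBot:
    """Toy 'learn by chatting' loop: answers are borrowed from past users."""

    def __init__(self):
        # Maps an utterance to replies previously given to it by users.
        self.replies = {}
        self.unanswered = []   # utterances never yet seen answered
        self.last_prompt = None

    def respond(self, utterance):
        # Learn: the user's utterance is a plausible reply to our last prompt.
        if self.last_prompt is not None:
            self.replies.setdefault(self.last_prompt, []).append(utterance)
        if utterance not in self.replies:
            self.unanswered.append(utterance)
        # Answer with a borrowed reply if we have one; otherwise, pose a
        # stored unanswered utterance back to the user to collect an answer.
        if self.replies.get(utterance):
            answer = random.choice(self.replies[utterance])
        elif self.unanswered:
            answer = self.unanswered.pop(0)
        else:
            answer = utterance  # fallback: mimic the user
        self.last_prompt = answer
        return answer
```

<p>After one user answers “Hello” with “Hi there”, the bot can reuse “Hi there” as its own reply the next time someone says “Hello”: the intelligence is borrowed, not modelled.</p>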
      <p>
        Another example of a chatbot that learns is Robby Garner’s
“Functional Response Emulation Device” (Fred), the
ancestor of Albert One, the winner of the 1998 and 1999
Loebner Prizes. Fred was a computer program that learned from
other people’s conversations in order to make its own
conversations
        <xref ref-type="bibr" rid="ref5">(Caputo et al., 1997)</xref>
        . Fred began with a library
of basic responses, so that it could interact with users, and,
from then on, it learned new phrases from users willing to
teach it17. Although such (unsupervised) learning may
lead to unexpected and undesirable results, with the growth
of the Internet and the possibility of having many people
talking with chatbots, one may foresee that these will evolve
quickly.
14sourceforge.net/projects/chatscript/
15This comparison can be found in gamasutra.com/blogs/
BruceWilcox/20120104/9179/.
16www.icogno.com/a_very_personal_entertainment.html
17www.simonlaven.com/fred.htm
      </p>
    </sec>
    <sec id="sec-8">
<title>4. The illusion of intelligence and/or the art of scripting</title>
      <p>Creating chatbots goes beyond writing good programs and
developing algorithms: in order to create a chatbot, more
than being a programmer, the botmaster must be an author.
Juergen Pirner, creator of the 2003 Loebner Prize winner
Jabberwock18, emphasises the scripting process behind a
chatbot, stating that, when failures occur, the
one at fault is not the engine but its author.</p>
      <p>
        Since making a chatbot involves preparing it for the
impossible mission of giving a plausible answer to every possible
interaction, botmasters usually take advantage of
several tricks to simulate understanding and intelligence. For
instance, Pirner describes basic techniques of scripted
dialogues, like “having a set of responses for each scripted
dialog sequence” and “ending those same responses with a
clue, a funny remark or a wordplay”. With Eliza, we learnt
that including the user's strings in its answers helps
maintain an illusion of understanding
        <xref ref-type="bibr" rid="ref20">(Mauldin, 1994)</xref>
        . Other
approaches focus on trying to guess what the user might say
or forcing him/her to say something expected.
      </p>
    </sec>
    <sec id="sec-9">
      <title>4.1. Giving the bot a personality</title>
      <p>Whereas personality has been a subject of study in the
agents community, deeply explored in all its complexity,
the concept is kept as simple as possible within chatbots.
As we have seen, what is common is the association of
an a priori “personality” with a chatbot, which can justify
some answers that would otherwise be considered
inappropriate. For instance, Eliza's Rogerian mode covers for its
answers, as it leads to a conversation in which the program
never contradicts itself, never makes affirmations, and is
free to know nothing or little about the real world without
raising suspicion. The same happens with Parry: being a
paranoid mental patient, its changes of subject or incongruous
answers are considered satisfactory and hide its absence
of understanding. The aforementioned Eugene Goostman
also follows along these lines. Veselov explains his
reasoning for such a character: “a 13 years old is not too old to
know everything and not too young to know nothing”19.
Thomas Whalen, winner of the 1994 Loebner Prize, took this
a step further with Joe, the janitor. Whalen's decision was
related to the fact that, contrary to previous editions of the
Loebner competition, where the conversation was restricted to
a topic, in 1995 the judges could pose any question. Hence,
Whalen decided that the best approach to deal with a
non-topic situation would be to present a system that “would
not simply try to answer questions, but would try to
incorporate a personality, a personal history, and a unique view
of the world”20. And so Joe was born. Joe was a night-shift
janitor on the verge of being fired. He was only
“marginally literate”, and he did not read books or
newspapers, or watch television. These premises by themselves
restricted the conversation by giving Joe a “fairly narrow
worldview”. Another trick was to use Joe's imminent
dismissal to introduce some stories revolving around it, which
would, at the same time, provide a way of directing the
conversation, the topic of the next section.
18www.chatbots.org/developer/juergen_pirner/
19www.huffingtonpost.com/2012/06/27/eugene-goostman2012-turing-test-winner_n_1630412.html
20hps.elte.hu/˜gk/Loebner/story95.htm</p>
    </sec>
    <sec id="sec-10">
      <title>4.2. Directing a conversation</title>
      <p>Personality can justify some appropriate answers, but the
best way to deal with unexpected interactions is to avoid
them. Thus, being able to direct the conversation is a trick
used by many chatbots, including in the simple form used by
Eliza, where the use of questions incited user
participation and kept the user in the conversation with little
contribution from the program.</p>
      <p>
        Converse (Batacharia et al., 1999), created by David Levy,
was the 1997 winner of the Loebner competition and did
extremely well by using the clever trick of controlling a
conversation. Although directing a conversation by
“talking a lot about a predefined topic” was already used
        <xref ref-type="bibr" rid="ref22">(Saygin
et al., 2000)</xref>
        , Converse's performance convinced a judge, for
the first five minutes, that it was really human: after
greeting the judge, Catherine (Converse's character) asked the
interrogator about something that had been on the news
the previous day and then kept talking about it, as can be
seen in the transcripts21. David Levy won the
Loebner Prize again in 2009 with Do-Much-More22, but this time the
system was more flexible in the range of topics and
responses it covered.
      </p>
    </sec>
    <sec id="sec-11">
      <title>4.3. Paying attention to small talk</title>
      <p>
        Small talk, or phatic communication
        <xref ref-type="bibr" rid="ref19">(Malinowski, 1923)</xref>
        ,
is another hot topic in chatbots. It can be seen as a “neutral,
non-task-oriented conversation about safe topics, where no
specific goals needs to be achieved”
        <xref ref-type="bibr" rid="ref8">(Endrass et al., 2011)</xref>
        .
Small talk can be used for two main purposes
        <xref ref-type="bibr" rid="ref23">(Schneider,
1988)</xref>
        : establishing a social relation by building rapport, and
avoiding (embarrassing) silence. As stated in
        <xref ref-type="bibr" rid="ref3">(Bickmore
and Cassell, 1999)</xref>
        , chatbots have been making use of the
small-talk mechanism. For instance, Epstein, an American
psychologist, professor, author, and journalist, went to an
online dating service and believed for several months that
a chatbot he had met there was a “slim, attractive
brunette”
        <xref ref-type="bibr" rid="ref10">(Epstein, 2007)</xref>
        . In brief, small talk is a constant
in all chatbot programs, used in non-sequiturs or canned
responses. It fosters the idea of understanding and eases
cooperation, facilitating human-like interactions by gaining
the user's trust and developing a social relationship
        <xref ref-type="bibr" rid="ref4">(Bickmore
and Cassell, 2000)</xref>
        .
      </p>
    </sec>
    <sec id="sec-12">
      <title>4.4. Failing like a human</title>
      <p>
        After introducing the imitation game, Turing presented an
example (Example 4) of a possible conversation one could
have with a machine
        <xref ref-type="bibr" rid="ref25">(Turing, 1950)</xref>
        . Observing this
example, besides the delay in providing the response, we can
easily see that the answer is wrong. As Wallace wrote23,
“we tend to think of a computer's replies ought to be fast,
accurate, concise and above all truthful”. However, human
communication is not like that: it contains errors,
misunderstandings, disfluencies, rephrasings, etc.
21www.loebner.net/Prizef/converse.txt
22www.worldsbestchatbot.com/
23www.alicebot.org/anatomy.html
      </p>
      <p>
        This is something that earlier chatbot writers already had
in mind, as some already cared about simulating typing. For
instance, Julia
        <xref ref-type="bibr" rid="ref20">(Mauldin, 1994)</xref>
        simulated human typing by
including delays and leaving some errors. Simulated typing
also proves to be useful in decreasing mistakes by slowing
down the interaction: Philip Maymin, a Loebner contestant
in 1995, slowed the typing speed of his program to the point
that a judge was not able to pose more than one or two
questions
        <xref ref-type="bibr" rid="ref14">(Hutchens, 1997)</xref>
        .
      </p>
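<p>Simulated typing of the kind attributed to Julia can be sketched as follows. This is an illustrative Python toy under our own assumptions (per-character jittered delays and a fixed probability of hitting a wrong key and leaving the error in place); the parameters Julia actually used are not documented here.</p>

```python
import random
import sys
import time

def type_like_a_human(text, cps=8.0, typo_rate=0.05, seed=None):
    """Print text with per-character delays and occasional uncorrected typos.

    cps: average characters per second; typo_rate: chance of replacing a
    letter with a random one. Returns the string as actually 'typed'.
    """
    rng = random.Random(seed)
    typed = []
    for ch in text:
        # Occasionally hit a wrong key and leave the error in, like Julia did.
        if ch.isalpha() and rng.random() < typo_rate:
            ch = rng.choice("abcdefghijklmnopqrstuvwxyz")
        typed.append(ch)
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(rng.uniform(0.5, 1.5) / cps)  # jittered typing speed
    sys.stdout.write("\n")
    return "".join(typed)
```

<p>Lowering cps also reproduces Maymin's trick of slowing the interaction down so that a judge can pose only one or two questions.</p>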
    </sec>
    <sec id="sec-13">
<title>5. Conclusions and future challenges</title>
      <p>The number of chatbots that can be found on the web
increases every day. Besides tools and corpora, the chatbots'
community has important know-how, which should not
be neglected by researchers targeting advances in
humanmachine communication. Therefore, we present a brief
historical overview of chatbots and describe main resources
and ideas. Furthermore, we highlight some chatbots,
relevant because they introduce new paradigms and/or won the
Loebner prize. However, it should be clear that these are
only the tip of the iceberg of the panoply of chatbots that
currently exist.</p>
      <p>We have seen that AIML and, more recently, ChatScript
are widely used languages for coding chatbots'
knowledge sources, and that, even in chatbots that
implement learning strategies, scripting is still at their core. We
have also seen that a personality capable of justifying some
of the chatbot’s answers, the capacity of directing a
conversation and producing small talk, and the idea of failing like
a human are some of the chatbots’ features that give the
illusion of intelligence. We have also grasped that to create
a chatbot, one “only” needs to think about a character and
enrich its knowledge bases with possible interactions. Even
better, that work does not need to be done from scratch as
many platforms already provide pre-defined interactions,
which can be adapted according to the chatbot character.
And this is the main richness of the chatbot’s community:
the immense amount of collected interactions, where the
majority of them represent real human requests. All this
data (after some validation) could be used to train current
end-to-end data-driven systems. A major future challenge
is to be able to automatically use all this information to
build a credible chatbot. How to avoid contradictory
answers? How to choose appropriate answers considering a
chatbot's character? And if we move to other sources of
dialogues, like the ones from books, theatre plays or movie
subtitles, will we be able, one day, to integrate all that
information to simulate real human dialogues?</p>
    </sec>
    <sec id="sec-14">
      <title>Bibliographical References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          Springer International Series in Engineering and Computer Science, pages
          <fpage>205</fpage>
          -
          <lpage>215</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Bibauw</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , François, T., and
          <string-name>
            <surname>Desmet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Discussing with a computer to practice a foreign language: research synthesis and conceptual framework of dialogue-based call</article-title>
          .
          <source>Computer Assisted Language Learning</source>
          ,
          <volume>0</volume>
          (
          <issue>0</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Bickmore</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cassell</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Small talk and conversational storytelling in embodied conversational interface agents</article-title>
          .
          <source>In Proc. of the AAAI 1999 Fall Symposium on Narrative Intelligence</source>
          , pages
          <fpage>87</fpage>
          -
          <lpage>92</lpage>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Bickmore</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cassell</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>How about this Weather? Social Dialogue with Embodied Conversational Agents</article-title>
          .
          <source>In Socially Intelligent Agents: The Human in the Loop</source>
          , pages
          <fpage>4</fpage>
          -
          <lpage>8</lpage>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Caputo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garner</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nathan</surname>
            ,
            <given-names>P. X.</given-names>
          </string-name>
          (
          <year>1997</year>
          ).
          <article-title>FRED, Milton and Barry: the evolution of intelligent agents for the Web</article-title>
          . In F. C. Morabito, editor,
          <source>Advances in Intelligent Systems</source>
          , pages
          <fpage>400</fpage>
          -
          <lpage>407</lpage>
          . IOS Press.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Colby</surname>
            ,
            <given-names>K. M.</given-names>
          </string-name>
          (
          <year>1974</year>
          ).
          <article-title>Ten criticisms of PARRY</article-title>
          .
          <source>SIGART Newsletter</source>
          , pages
          <fpage>5</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Dale</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>The return of the chatbots</article-title>
          .
          <source>Natural Language Engineering</source>
          ,
          <volume>22</volume>
          (
          <issue>5</issue>
          ):
          <fpage>811</fpage>
          -
          <lpage>817</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Endrass</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rehm</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>André</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Planning Small Talk behavior with cultural influences for multiagent systems</article-title>
          .
          <source>Computer Speech &amp; Language</source>
          ,
          <volume>25</volume>
          (
          <issue>2</issue>
          ):
          <fpage>158</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Epstein</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>The Quest for the Thinking Computer</article-title>
          .
          <source>AI Magazine</source>
          , pages
          <fpage>81</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Epstein</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>From Russia, with Love. How I got fooled (and somewhat humiliated) by a computer</article-title>
          . Scientific American Mind.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Fahlman</surname>
            ,
            <given-names>S. E.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Using Scone's Multiple-Context Mechanism to Emulate Human-Like Reasoning</article-title>
          .
          <source>In Advances in Cognitive Systems: Papers from the 2011 AAAI Fall Symposium</source>
          , pages
          <fpage>98</fpage>
          -
          <lpage>105</lpage>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Fialho</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coheur</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Curto</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cláudio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costa</surname>
            ,
            <given-names>Â.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meinedo</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Trancoso</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Meet Edgar, a tutoring agent at Monserrate</article-title>
          .
          <source>In Proc. of the 51st Annual Meeting of the ACL: System Demonstrations</source>
          , pages
          <fpage>61</fpage>
          -
          <lpage>66</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Følstad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Brandtzaeg</surname>
            ,
            <given-names>P. B.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Chatbots and the New World of HCI</article-title>
          .
          <source>Interactions</source>
          ,
          <volume>24</volume>
          (
          <issue>4</issue>
          ):
          <fpage>38</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Hutchens</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          (
          <year>1997</year>
          ).
          <article-title>How to Pass the Turing Test by Cheating</article-title>
          .
          <source>Technical report, Univ. of Western Australia.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Kuipers</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McCarthy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Weizenbaum</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1976</year>
          ).
          <article-title>Computer power and human reason</article-title>
          .
          <source>SIGART Bull.</source>
          , pages
          <fpage>4</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Morgenstern</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>The Winograd Schema Challenge</article-title>
          .
          <source>In Proc. of the Thirteenth International Conf. on Principles of Knowledge Representation and Reasoning</source>
          , pages
          <fpage>552</fpage>
          -
          <lpage>561</lpage>
          . AAAI.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H. J.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>On our best behaviour</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>212</volume>
          :
          <fpage>27</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Galley</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brockett</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Dolan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A diversity-promoting objective function for neural conversation models</article-title>
          .
          <source>In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , pages
          <fpage>110</fpage>
          -
          <lpage>119</lpage>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Malinowski</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>1923</year>
          ).
          <article-title>The Problem of Meaning in Primitive Societies</article-title>
          . In
          <source>The Meaning of Meaning</source>
          , page 38. Harcourt Brace Jovanovich, Inc.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Mauldin</surname>
            ,
            <given-names>M. L.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>ChatterBots, TinyMuds, and the Turing test: entering the Loebner Prize competition</article-title>
          .
          <source>In Proc. of the 12th National Conference on Artificial Intelligence (vol. 1)</source>
          ,
          <source>AAAI '94</source>
          , pages
          <fpage>16</fpage>
          -
          <lpage>21</lpage>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Powers</surname>
            ,
            <given-names>D. M. W.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>The total Turing test and the Loebner prize</article-title>
          .
          <source>In Proc. of the Joint Conf. on New Methods in Language Processing and Comp. Natural Language Learning</source>
          , NeMLaP3/CoNLL '98, pages
          <fpage>279</fpage>
          -
          <lpage>280</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Saygin</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cicekli</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Akman</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Turing test: 50 years later</article-title>
          .
          <source>Minds and Machines</source>
          ,
          <volume>10</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>1988</year>
          ).
          <article-title>Small Talk: Analyzing Phatic Discourse</article-title>
          . Sprachwissenschaftliche Reihe. Hitzeroth.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Schumaker</surname>
            ,
            <given-names>R. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ginsburg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>An evaluation of the chat and knowledge delivery components of a low-level dialog system: The AZ-ALICE experiment</article-title>
          .
          <source>Decision Support Systems</source>
          ,
          <volume>42</volume>
          (
          <issue>4</issue>
          ):
          <fpage>2236</fpage>
          -
          <lpage>2246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Turing</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          (
          <year>1950</year>
          ).
          <article-title>Computing Machinery and Intelligence</article-title>
          .
          <source>Mind</source>
          ,
          <volume>59</volume>
          :
          <fpage>433</fpage>
          -
          <lpage>460</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Vinyals</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>A neural conversational model</article-title>
          .
          <source>arXiv preprint arXiv:1506.05869</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Wallace</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tomabechi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Aimless</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Chatterbots Go Native: Considerations for an eco-system fostering the development of artificial life forms in a human world</article-title>
          . http://www.pandorabots.com/pandora/pics/chatterbotsgonative.doc.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Weizenbaum</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1966</year>
          ).
          <article-title>ELIZA - a computer program for the study of natural language communication between man and machine</article-title>
          .
          <source>Comm. of the ACM</source>
          ,
          <volume>9</volume>
          :
          <fpage>36</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>