CEUR-WS Vol-2400, paper 24: https://ceur-ws.org/Vol-2400/paper-24.pdf (DBLP: https://dblp.org/rec/conf/sebd/BlasLPPRR19)
           Data Driven Chatbots: a New Approach to
                 Conversational Applications

Nicoletta Di Blas1, Luca Lodi1, Paolo Paolini1, Barbara Pernici1, Fabrizio Renzi2 , and
                                   Donya Rooein1
                        1 Politecnico di Milano (DEIB), Milan, Italy
                                   2 IBM Research, Italy




   Abstract. Chatbots and conversational interfaces are becoming ubiquitous and a
new HCI paradigm to access applications and information. This paper proposes a
novel approach and an innovative technology (iCHAT) for the development of “data-
driven chatbots”. Key ingredients are “meta-conversation”, “conversation tables”
(controlling the interface), and a “conversation engine”.
   Several advantages are envisioned: (i) lower effort for developing new conversa-
tional applications; (ii) ease of maintenance and update, and therefore improved
quality; (iii) the possibility for content experts to develop conversational applications
without the need for ICT experts. The approach of iCHAT is quite general and can
be applied to all domains. It is being tested by developing a conversational tutor, sup-
porting adaptive learning, for two MOOCs developed at Politecnico di Milano (Italy)
in the frame of an EIT program. This paper describes the overall architecture of
iCHAT and analyzes its most original aspects: the conversation engine and the con-
versation tables.

Keywords: Chatbots, Data-driven application development, Educational framework.


1      Introduction and Background

   A new paradigm for interfaces is becoming ubiquitous: conversational interfaces
[1]. Conversations, today, are available via different technologies and different
software solutions. Users, especially youngsters, are getting ever more used to con-
versations as a way to interact with applications. Conversations can be supported by
several devices: computers, phones, home devices (e.g. “Echo”), etc. They can be
delivered via text interfaces or audio interfaces (using speech-to-text and/or text-to-
speech technologies). Current technologies, such as WATSON by IBM, support quite
well the development of specific conversations, by designing conversation flows.
    This paper introduces a new paradigm for designing conversational interfaces,
based on two core ideas: A) designing and implementing chatbots as “data-driven”,
i.e. controlled via databases and tables; B) using chatbots as interfaces for adaptive,
information-intensive applications. These applications allow users to access
organized streams of information items.
    Conversational interfaces may offer: (i) ease of use, especially for younger
users; (ii) a lower level of attention required, letting the user focus upon content; (iii)
empathic (if not entertaining) interaction, thus improving the user satisfaction; (iv)
encouragement to the users to express themselves; (v) possibility to collect “non-
functional feedback” (e.g. emotions or feelings) as the interaction process evolves;
(vi) possibility to react to unforeseen situations (e.g. to a perceived distress by the
learner), etc.
    Some important research questions arise: A) which functionalities should be sup-
ported? B) how should the production process be organized? C) which technology
should be used to support “A” and “B”?
    Question A is about Adaptivity: what does it mean, from a user point of view, to
access adaptively a set of information items? We started investigating Adaptivity in
the domain of Education, striving for an adaptive use of material offered by a MOOC.
Our approach is exemplified (in the realm of education) by iMOOC, i.e. a methodolo-
gy for developing and organizing online courses [2]. iMOOC is based upon the as-
sumption that there is a “body of knowledge” represented by a (large or very large)
set of content items. Items should be properly tagged and enriched with metadata
(e.g. level of difficulty, time available, type of proffered content, …), in order to make
adaptivity possible. Items are offered to a user arranged into “learning paths”, i.e.
predefined topologies used for traversing the “corpus” of content. Each learning path
corresponds to an “adaptive answer” to a learning need (according to various crite-
ria, as will be explained next). Politecnico di Milano has used iMOOC in order to
develop two online courses, “Recommender Systems Basic” and “Recommender Sys-
tems Advanced”, that will be available on Coursera by Summer 2019. They are part
of a master of the European Institute of Technology (EIT). The first course consists
of 43 items, for a total of 110 minutes of videos; the second course consists of 31 items,
for a total of 106 minutes of videos.
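As a toy illustration of such a tagged corpus (the field names below are our own assumptions, not the actual iMOOC schema), a content item and a learning path could be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """One tagged content item (e.g. a short video), enriched with metadata."""
    item_id: str
    title: str
    minutes: float        # running time of the video
    difficulty: str       # e.g. "basic", "medium", "advanced"
    tags: list = field(default_factory=list)

@dataclass
class LearningPath:
    """A predefined topology for traversing the corpus (here: a linear one)."""
    path_id: str
    item_ids: list                                 # ordered item ids
    criteria: dict = field(default_factory=dict)   # e.g. {"time_available": 30}

# A miniature corpus in the spirit of the "Recommender Systems Basic" course
urm = ContentItem("rs-07", "URM matrix", 2.5, "medium", ["recommended-by-teacher"])
short_path = LearningPath("basic-short", ["rs-01", "rs-07"], {"time_available": 30})
```

A learning path then amounts to a selection and ordering of item ids, with the metadata driving which path fits a given learner.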
    Question B is about feasibility and sustainability of adaptive conversational ap-
plications. The production process must be reasonably efficient, both from an author
and a publisher point of view. If the development of a chatbot requires excessive
effort by the author, it will not be affordable. We are discussing with major publishers
about this issue, but in this paper we only address it indirectly, while describing the
technology.
    Question C is about technology, but it has a clear impact on “A” (what the chat-
bot does), and “B” (how the chatbot is designed, implemented and maintained). To
address this question, at Politecnico di Milano (in cooperation with IBM Research
Italy) we are developing iCHAT [3]. In this paper we are specifically concerned with
a portion of the technology: how to make conversations data-driven, instead of
being explicitly modelled for each application.
   In Section 2 we briefly discuss the functionalities of chatbots (for the domain of
education) and the overall architecture of iCHAT. In Section 3, we discuss the main
theme of this paper: how to use a data-driven approach to design and implement a
conversational chatbot. In Section 4 we discuss the state of the art. In Section 5 we draw
conclusions and sketch future work.


2      iCHAT Architecture for Educational Chatbots
The overall conceptual architecture of iCHAT is shown in Figure 1.




Figure 1: The conceptual architecture of iCHAT

   iCHAT entails three main engines: the conversation engine, the interpretation
engine, and the transition engine. Since the conversation is the main subject of this
paper, for the sake of completeness we briefly describe here the other two engines.
   The role of the transition engine is to control the progress on a learning path (or a
pathway of content items in general). When a proper request is received, the transition
engine selects the next content item for the user. The transition engine is driven via
the “content DB”, consisting of the content items (with their tags and metadata) and
the pathways (again with their tags and metadata).
   The role of the interpretation engine is to decide, using the session-variables col-
lected up to a certain moment, how the conversation should move on. A conversation
could be “completed” (the pathway has been fully covered), “paused” (for a later
resumption), “halted” (with no resumption possible) or “continued”. In the latter
case, a proper request is formulated to the transition engine. The interpretation engine
is controlled via a set of rules called “interpretation tables”.
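As a minimal sketch of what such rules could look like (the session-variable names and rule conditions here are our own illustration, not the actual interpretation tables), the four outcomes might be computed as:

```python
def interpret(session):
    """Map the session variables collected so far to a conversation outcome."""
    if session.get("user_quit"):
        return "halted"        # no resumption possible
    if session.get("pause_requested"):
        return "paused"        # for a later resumption
    if session["items_done"] >= session["items_total"]:
        return "completed"     # the pathway has been fully covered
    return "continued"         # a proper request goes to the transition engine

# After 6 of 15 items, with no pause or quit, the conversation continues
print(interpret({"items_done": 6, "items_total": 15}))  # continued
```

In the "continued" case, the interpretation engine would formulate a request to the transition engine for the next content item.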
   The role of the conversation engine is to keep the conversation between the user
and the chatbot effective and efficient. Users have their turns, and the chatbot has its
turns. The most important turns of the chatbot (at least from a learning point of
view) are those in which content items are offered for “consumption”. In our ap-
proach, the chatbot is not an answering machine but rather a proactive tutor. The
chatbot has a pathway of content2 to cover, and a successful conversation is the one
leading the user to the completion of the pathway. This is why the chatbot has to be
“empathic” and “persuasive”: the appropriateness of its turns and the quality of
the wording are very important.
   The chatbot is not an expert of a domain, though it may seem to be to the user.
The chatbot, in fact, knows about the content items (title, abstract, metadata, tags, …),
knows about the pathways (content, topology, metadata, …), knows what the user has
done up to a certain point and what still needs to be covered, but it does not actually
understand the domain3. We can say that the chatbot knows a specific “corpus of con-
tent” but not the subject of the course.
   Adaptivity of the learning experience comes in two fashions. Content Adaptivity:
only suitable content items are offered to the user, and with a suitable organization.
This can be very important, for example, for MOOCs [2, 3] or professional
training on the job, where there is a large variety of learners, with different learning
goals, different backgrounds, different degrees of possible efforts, etc. Conversation
Adaptivity: the conversation is tuned to the user profile and the specific needs.
Different elements can be “adapted”: the number of the chatbot’s turns, the subject
of those turns, the length of those turns, the wording used by the chatbot, etc. In
the next section we focus upon the issue of controlling the adaptivity of the chatbot.


3        Controlling the conversation of a chatbot: a data-driven
         approach
   In this paper, we mainly examine a data-driven approach to control the
conversation of a chatbot. Figure 2 shows a sample of a conversation (from the
course “Recommender Systems Basic”).
   Most chatbots, currently, could be considered as “glorified answering machines”:
the user takes the lead and formulates requests, the chatbot answers in a conversation-
al fashion (that may take several turns).
   For iCHAT, instead, the approach is quite different, since the chatbot is somehow
leading the conversation. A conversation goes on as a sequence of “turns” by the
chatbot and the user. The turns are not necessarily alternated: the chatbot and the user
can take some turns in a sequence4. Both for the user and the chatbot there are solicit-
ed turns (replying to a request) and unsolicited turns (autonomous moves). In line 3,
for example, the user is replying to a request by the chatbot; in line 7 the user makes
an unsolicited comment. In line 10 the chatbot replies to a request by the user [4].

2 The selection of the proper pathway (among a set of possible pathways on the same content)
    is not part of the chatbot and falls outside the scope of this paper.
3 And this is different from other approaches aiming at building “smart chatbots”.
4 This creates the problem of avoiding overlapping turns, which we cannot discuss here.
   1 Chatbot: so far you have covered 6 items, for a total of 13 minutes.
   2 Chatbot: can we move on or do you feel tired?
   3 User: please, go on
   4 Chatbot: next item is of medium difficulty and it is recommended by the teach-
er. The title is URM matrix and it takes 2.5 minutes.
   5 Chatbot: would you like an abstract of it?
   6 User: no, move on
   … 
   7 User: I found it relatively easy
   8 Chatbot: so far you are doing well; congratulations….
   9 User: how much do we have to cover still to complete this learning path?
   10 Chatbot: there are 9 items, for a total of 17 minutes. 4 items are advanced
   ….
Figure 2: fragment of a possible conversation for the course “Recommender Systems, Basic”

These are the main elements of our approach:
- The chatbot has an “agenda” driven by content. In this example, the agenda is a
   learning path to be covered [3]; in another application (being developed at
   Politecnico di Milano) [4] the agenda could be a set of activities to be performed
   by children with special needs.
- The agenda implies that the chatbot helps the user to go through a set of con-
   tent items (arranged in a possibly complex topology). The purpose of the chatbot
   is therefore to “persuade” the user to access (flexibly and adaptively) the maxi-
   mum number of items (while following the indications of the author) and also
   to “support” this consumption of content.
- As a consequence of the above, the chatbot is quite proactive. It takes the lead
   in supporting the user in an adaptive course, while also listening to possible specific
   requests.
- The user may “interrupt” the chatbot for different reasons: making comments or
   expressing feelings (e.g. line 7), requesting additional info (e.g. line 9) or request-
   ing (directly or indirectly) to “suspend” the conversation or to “stop” it.
- The fact that the user can express her feelings suggests a number of interesting
   developments: (i) data may suggest another course of action (e.g. accelerating or
   using a different conversation style); (ii) “non-functional data analytics” (i.e. how
   the user reacts to interactions with the chatbot) can be harvested and used later to
   improve content and its organization5; (iii) data may suggest ways to improve the
   conversation styles of the chatbot.
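A toy turn loop can make this agenda-driven, proactive behaviour concrete. The sketch below is our own illustration (the reply vocabulary and the canned wordings are assumptions, not iCHAT code): the chatbot proposes items from its agenda, while the user may reply, interrupt with an unsolicited comment, or suspend the session.

```python
def run_session(agenda, replies):
    """Walk an agenda of items, letting the user interrupt between items.

    `agenda` is an ordered list of item titles; `replies` is an iterable of
    scripted user turns ("go on", "stop", or an unsolicited "comment: ...").
    """
    transcript, it = [], iter(replies)
    for item in agenda:
        transcript.append(("chatbot", f"Next item is {item}. Shall we move on?"))
        reply = next(it, "go on")
        transcript.append(("user", reply))
        if reply == "stop":                       # user suspends the conversation
            transcript.append(("chatbot", "Ok, we can resume later."))
            break
        if reply.startswith("comment:"):          # unsolicited turn: acknowledge
            transcript.append(("chatbot", "Thanks for sharing that!"))
        transcript.append(("chatbot", f"Here is '{item}'."))
    return transcript

log = run_session(["URM matrix", "Collaborative filtering"], ["go on", "stop"])
print(log[-1])  # ('chatbot', 'Ok, we can resume later.')
```

In iCHAT the wording of each chatbot turn would not be hard-coded as above but looked up in the conversation tables described in the next section.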


3.1    Chatbots are “data driven”
  The most outstanding feature of iCHAT chatbots is the way they are modelled and
implemented, justifying the overall label of “data driven”.

5 This would be a sharp improvement over what Learning Management Systems do: they log
   the actions of the user, but they do not know why the user is performing those actions.
    Chatbot technology is currently mainly based upon conversation modelling. Using,
for example, WATSON conversation features [5], or a similar platform, the main task
is to identify possible inquiries by the user (called “user intents”): they are the possi-
ble subjects of the conversation. For each subject, possible continuations of the con-
versation are identified. The result, overall, is a tree-structure6 representing the possi-
ble evolution of the conversation7. It is therefore clear that modelling an application
consists in modelling a conversation: data are accessed when needed by the conver-
sation. A negative effect is that for each different problem the conversation must be
modeled and implemented: specialists are needed; time is required; costs are high;
maintenance is difficult. The goal of iCHAT is to make chatbot production sustain-
able, lowering costs and not requiring extensive intervention by ICT specialists. If a
new conversation is needed, the authors or conversation specialists should be able to
generate it, without modelling it explicitly, which is a job for ICT specialists. This is
the essence, as far as conversation is concerned, of what we call being “data driven”.
The approach of iCHAT is drastically different from the above described conversation
modeling.




Figure 3 A simplified conceptual representation of the meta-conversation controlling the turns
of the chatbot

These are the main features of iCHAT for controlling the chatbot:
a) There is a unique “meta-conversation” model, a simplified version of which is
shown in Figure 3. The meta-conversation is content independent and therefore can be
used for various applications.
b) Different arcs of the meta-conversation correspond to different states of a “con-
versation machine”. For each “arc” we have defined the possible turns of the chatbot,
creating “conversation tables”.


6 Additional mechanisms (like “activation rules” or “context variables”) make it possible to
   deliver conversations that do not appear hierarchical to the user.
7 More sophisticated, but in essence not dissimilar from the hierarchical menus of call-answering
   software.
c) The turns are purely conversational, and they are totally unrelated to the subject of
the conversation. If reference to a specific “corpus of content” is needed8, this is ob-
tained by using general variables and accessing the content DB.
d) The turns of the chatbot may belong to one out of 11 different categories. Consid-
ering the conversation of the example of Figure 2, we use categories like “summary”
(line 1), “preview” (line 4), “proactivity” (lines 2, 5), “reinforcement” (line 8), “fore-
cast” (line 10). For each category we specify: (i) what the chatbot may say; (ii) the
“rule”: under which condition it will say it.
e) Users are assigned a “conversational profile”. Using a number of variables (such as
time available, level of background, …), each user falls into a “stereotype”. With stereo-
type 1, for example, the chatbot keeps the conversation to a minimum (fewer turns,
fewer categories, short formulations). With stereotype 3, the chatbot formulates an
extended conversation. Stereotype 2 falls in between.
f) In addition to the “standard” formulation there are alternative formulations for each
possible turn, which are used to make the chatbot more “human”. Line 2 of Figure 2,
for example, could be formulated as: “Now we can move to the next item”.
g) Whether a chatbot, in a given situation, will say “x” depends on 3 factors: (i) is “x”
appropriate in this situation? (ii) is “x” appropriate for the user profile? (iii) is the
current turn “far enough” from the last turn when “x” was uttered? Let us consider,
for example, line 8 of Figure 2; it is a reinforcement message, and it would sound unnatu-
ral (and boring) to repeat it every time the user gets a content item.
h) In order to make the chatbot “content aware”, the turns of the chatbot may embed
“templates” and “variables”. A turn, for example, could be (as in line 4 of Figure 2)
“Next item is <short item description>”. The template “short item description” is
defined in a separate table, as a pattern of words incorporating a number of variable
values. Another example is shown at line 10 of Figure 2: the chatbot can speak about
what is coming next. Examples of variables are: item_title, item_abstract, item_length,
etc.
i) Technically speaking, the meta-conversation is content independent, but users may
have a different perception. From their point of view, in fact, the chatbot knows the
material being covered: (i) the overall organization of the material; (ii) the title and
the description of chapters and sections; (iii) the title and the description of each item
(including length, level of difficulty, relevance, …).
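Points d) to h) can be sketched in code. In the following hedged illustration (the table layout, the `min_gap` rule and the variable names are our own assumptions about the mechanism, not the real conversation tables), a turn is rendered only if it passes rule (iii), and templates are expanded with content variables:

```python
import string

# A toy fragment of a conversation table for one arc (our own layout).
TABLE = {
    ("arc3.1", "preview"): {
        "formulations": ["Next item is ${item_title}; it is ${item_difficulty} "
                         "and it takes ${item_length} minutes."],
        "min_gap": {1: 4, 2: 2, 3: 1},             # per-stereotype reuse distance
    },
    ("arc3.1", "reinforcement"): {
        "formulations": ["You are doing great!"],
        "min_gap": {1: float("inf"), 2: 5, 3: 3},  # stereotype 1: never
    },
}

def render_turn(arc, category, stereotype, turn_no, last_used, variables, alt=0):
    """Return the chatbot's wording, or None if rule (iii) suppresses the turn."""
    entry = TABLE[(arc, category)]
    # Rule (iii): the current turn must be "far enough" from the last use.
    if turn_no - last_used.get(category, -10**9) < entry["min_gap"][stereotype]:
        return None
    last_used[category] = turn_no
    return string.Template(entry["formulations"][alt]).substitute(variables)

last = {}
print(render_turn("arc3.1", "preview", 2, 5, last,
                  {"item_title": "URM matrix",
                   "item_difficulty": "of medium difficulty",
                   "item_length": "2.5"}))
print(render_turn("arc3.1", "reinforcement", 1, 6, last, {}))  # None: suppressed
```

Changing the chatbot's wording, or its verbosity per stereotype, then only means editing the table data, never the engine.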

Figure 4 shows a simplified sample of “settings” for Arc 3.1, the most important one
(in reality, there are more than 60 settings for this arc).

Conversation tables, such as the ones shown in Fig. 4, allow relevant results:
- Creating and delivering a chatbot tutor for a different course would only require
creating a different content database.
- Changing the “wording” of the various turns would only require modifying tables
like the one shown in Figure 4, without the support of ICT experts.
- Modifying the turns of the chatbot, again, only requires modifying the conversation
table, without modelling the conversation again.


8 E.g. the title of an Item or its description.
- Changing discipline (e.g. to a course in Humanities) would also imply a (possibly
minor) revision of the conversation tables, with no need of ICT experts.



CATEGORY: ASSESSMENT
Stereotype 1 (every M turns):
   Basic formulation: What do you think of this learning session, so far?
   Alternative 1: How is this learning session going?
   Alternative 2: What is your opinion on this learning session?
Stereotype 3 (every N turns):
   Basic formulation: <…>, could you rate the learning session that you have followed so far? What do you think of it?
   Alternative 1: <…>, are you happy with what you are learning, so far?
   Alternative 2: <…>, I'd like, again, to know your opinion on this learning session. Does it seem useful to you?
Figure 4a A simplified representation of some turns for assessment for ARC3.1

CATEGORY: PREVIEW
Before proposing the next item:
   Basic formulation: Next item is <short item description>.
   Alternative 1: <…> to be <…>
   Alternative 2: the next item: <…>.
Before proposing pathways:
   Basic formulation: Next item is <…>.
   Alternative 1: <…> <short item description long>.
   Alternative 2: the next item: <…>.
Figure 4b A simplified representation of some turns for preview for ARC3.1



CATEGORY: REINFORCEMENT
Stereotype 1: never; Stereotype 2: every M turns:
   Basic formulation: <…>, you are progressing well. Good job!
   Alternative 1: You are doing great!
   Alternative 2: It seems that you are doing well, congratulations!
Stereotype 3: every N turns:
   Basic formulation: <…>, you are progressing well. Good job!
   Alternative 1: You are doing great!
   Alternative 2: It seems that you are doing well, congratulations!
Figure 4c A simplified representation of some turns for reinforcement for ARC3.1 of Fig. 3

   In the next section we briefly analyze the relevant state of the art, while in Section 5
we sketch the future direction of research.
4       State of the art
   Various market technologies are available for the creation of chatbots, almost all
sharing a similar approach to design. Most of the technologies used in chatbots are
based on pattern-matching techniques and language tricks [6]. IBM Watson Assistant,
for example, allows the creation of customized chatbots9, modeling the conversa-
tion as a tree of dialogue nodes, where each node associates a request by the user
with a set of replies by the chatbot. The use of “context variables” and of “rules” (gov-
erning when a reply is appropriate or not) makes the chatbot quite flexible and power-
ful. Still, two main drawbacks remain.
   The first drawback is that chatbots are conceived (and perceived) as “answering
machines” [7]. The emphasis is on understanding the user’s input, by classifying the
user’s will into a series of “intents”. An intent is defined by several example mes-
sages. Thus, the most common use cases for this kind of technology are ques-
tion answering and the automation of business tasks. The purpose of the work pre-
sented in this paper is instead to build chatbots that can use conversations as a way to
support the learning process, as already Carbonell, in 1970 [8], meant to do when he
created a Socratic tutor using a semantic network technique. Since the seventies, a
number of different efforts have been made in this direction and Pedagogical agents,
Educational agents, Learning companions, Virtual Teaching Assistants, Intelligent
Tutoring Systems have been designed and deployed. Though not all of them can be called
“chatbots” in a strict sense, they all show how the dream of supporting interactive
learning has been persistent and resilient [9, 10].
    Pedagogical agents (PAs), or Educational Agents, are “lifelike characters in virtual
environments that help facilitate learning through social interactions and the virtual
real relationships with the learners” [11, 12]. They can be seen as “computer-
simulated character, which presents users with human-like characteristics, such as
domain competence, emotions, and other personal characteristics” [13]; they can go
from making a presentation engaging to interacting on a topic [14]. In contrast to PAs,
chatbots provide an interface that interacts with learners synchronously, reacting
to individual intents [15]. Learning companions can be seen as “animated digital
characters functioning to simulate human-peer-like interaction—might provide an
opportunity to simulate such social interaction in computer-based learning” [13, 16].
Virtual Teaching Assistants support teachers in delivering courses (e.g. a programming
course: Chou et al. [17]). Intelligent Tutoring Systems (ITSs) are “virtual teachers”
that can be used for one-to-one tutoring [18-20]. Technology-mediated learning
(TML) is an environment to provide learning materials in an interactive way for stu-
dents [21]. In TML, students are impacted during the learning process by structural
factors such as learning methods [22]. Nowadays chatbots, which are riding a new wave
of interest [1, 23-25], are in the same stream.
   The second drawback, more relevant for this paper, is that current approaches use
conversations explicitly modelled as such; this means that taking up a different appli-
cation would require re-doing the modeling. Our approach, instead, is totally different:
there is a meta-conversation fitting, if not all, at least a quite large number of use

    9 https://console.bluemix.net/docs/services/assistant/dialog-overview.html#dialog-overview
cases, and each new case only requires providing a new set of data (mainly a content
DB, as described in this paper). The novelty brought about by the iCHAT approach
lies mainly in the separation between educational content in the strict sense (learning
objects, videos, texts…) and conversation: the chatbot engages in a conversation with
the user, leading it like a teacher would, allowing the user to express all (and only)
the intents (“I’m frustrated”, “I’ve understood”, “this is not clear”…) that are relevant for
the bot to decide what to show next, following a smart strategy the instructional de-
signer has integrated in the content organization.


5      Conclusions and future work
   In this paper we have described a technology for designing and deploying data-
driven chatbots. Politecnico di Milano is developing iCHAT, leveraging
WATSON technology, in cooperation with IBM Research Italy.
   We are striving for a generalized solution that could be applied to various appli-
cations. The current chatbot fully supports two courses (MOOCs) developed by
Politecnico di Milano. Learning pathways, currently, are defined taking into account
the following criteria: what part of the course needs to be covered; the level of diffi-
culty of the desired items; the relevance of the items (as defined by the teacher). The
conversation is profiled according to two criteria: time available and learner’s back-
ground.
   As far as education is concerned, these are the most relevant aspects: conversa-
tional interfaces can help make learning processes more effective. Ef-
fectiveness should come (i) from the possibility of better enforcing adaptive learn-
ing, (ii) from the possibility of fostering longer learning sessions and also (iii) from
the increased “empathy” of the interaction, fostering a better mood for the learner. In
addition, conversational interaction makes it possible to collect crucial “non-
functional information” about the learner: how she feels, what she thinks, her cogni-
tive/emotional situation, … This may lead to the very interesting development of a new
generation of learning analytics, focusing not so much on the “mechanics” of learn-
ing (still available, in any case), but rather on the learner’s reaction to learning.
   As far as technology is concerned, the most important contribution of iCHAT is
the idea that instead of modelling and implementing conversations, chatbots should be
driven by data. Conversation tables, in fact, make it possible to control the turns of conversa-
tion by the chatbot: a) when to speak and when to use a given “category” of what is being
said; b) how to use it; c) the specific “message”; d) the specific wording, etc. Conver-
sation tables are used to drive a meta-conversation. Pathways are another
aspect of “data-driven” chatbots: they provide the “agenda” for the conversation.
   Another part of the effort, not discussed in this paper, is a set of tables used to un-
derstand what the user says (“intents”). In all cases our chatbot does not allow a gen-
eral conversation; only turns of conversation focused on the learning process (e.g. “I
like it”, “it was difficult”, “I’m tired”, …) are recognized. It is clear that only empiri-
cal validation with users over an extended period (at least 6 months) will make it
possible to tune the various tables, making the conversations fluid and effective from
the user’s point of view.
   Several technological improvements are planned, the most important being: (i)
improving the ability to understand what the user says; (ii) creating a friendly au-
thoring environment for conversation tables; (iii) enhancing the transition engine, in
order to handle more complex topologies for pathways; (iv) expanding and refining
the meta-conversation model, etc.
   In terms of applications, IBM and Politecnico are considering enlarging the scope
of iCHAT technology, pursuing a more general-purpose idea: adaptive streams of
information items delivered via adaptive conversational interfaces. Applications
are being investigated for domains like eCulture, eTourism and eFood.


6      References

1.       Song, D., E.Y. Oh, and M. Rice. Interacting with a conversational agent system for
         educational purposes in online courses. in 2017 10th international conference on
         human system interactions (HSI). 2017. IEEE.
2.       Casola, S., et al. Designing and Delivering MOOCs that Fit all Sizes. in Society for
         Information Technology & Teacher Education International Conference. 2018.
         Association for the Advancement of Computing in Education (AACE).
3.       Akcora, D.E., et al. Conversational support for education. in International Conference
         on Artificial Intelligence in Education. 2018. Springer.
4.       Di Blas, N., Lodi, L., Paolini, P., Pernici, B., Raspa, N., Rooein, D., Renzi, F.,
         Sustainable Chatbots supporting Learning. ED-MEDIA 2019, in World Conference
         on Educational Multimedia, Hypermedia & Telecommunications. 2019: Amsterdam.
5.       Garzotto, F. and M. Gelsomini, Magic Room: A Smart Space for Children with
         Neurodevelopmental Disorder. IEEE Pervasive Computing, 2018. 17(1): p. 38-48.
6.       Masche, J. and N.-T. Le. A review of technologies for conversational systems. in
         International Conference on Computer Science, Applied Mathematics and
         Applications. 2017. Springer.
7.       Bollweg, L., et al. When Robots Talk-Improving the Scalability of Practical
         Assignments in MOOCs Using Chatbots. in EdMedia+ Innovate Learning. 2018.
         Association for the Advancement of Computing in Education (AACE).
8.       Carbonell, J.R., AI in CAI: An artificial-intelligence approach to computer-assisted
         instruction. IEEE transactions on man-machine systems, 1970. 11(4): p. 190-202.
9.       Croxton, R.A., The role of interactivity in student satisfaction and persistence in
         online learning. Journal of Online Learning and Teaching, 2014. 10(2): p. 314.
10.      Person, N.K. and A.C. Graesser, Pedagogical agents and tutors. Encyclopedia of
         Education, Macmillan, New York, 2006: p. 1169-1172.
11.      Davis, R. and P. Antonenko, Effects of pedagogical agent gestures on social
         acceptance and learning: Virtual real relationships in an elementary foreign language
         classroom. Journal of Interactive Learning Research, 2017. 28(4): p. 459-480.
12.      Page, L.C. and H. Gehlbach, How an artificially intelligent virtual assistant helps
         students navigate the road to college. AERA Open, 2017. 3(4): p.
         2332858417749220.
13.   Chou, C.-Y., T.-W. Chan, and C.-J. Lin, Redefining the learning companion: the past,
      present, and future of educational agents. Computers & Education, 2003. 40(3): p.
      255-269.
14.   André, E., T. Rist, and J. Müller. WebPersona: a life-like presentation agent for
      educational applications on the world-wide web. in Proc. of Workshop" Intelligent
      Educational Systems on the World Wide Web" at AI-ED. 1997.
15.   Winkler, R. and M. Söllner, Unleashing the potential of chatbots in education: A
      state-of-the-art analysis. 2018.
16.   Jia, J., CSIEC: A computer assisted English learning chatbot based on textual
      knowledge and reasoning. Knowledge-Based Systems, 2009. 22(4): p. 249-255.
17.   Chou, C.-Y., B.-H. Huang, and C.-J. Lin, Complementary machine intelligence and
      human intelligence in virtual teaching assistant for tutoring program tracing.
      Computers & Education, 2011. 57(4): p. 2303-2312.
18.   Anderson, J.R. and Reiser, B.J., The LISP tutor. Byte, 1985. 10(4): p. 159-175.
19.   The Economist. Mobile services: Bots, the next frontier. 2016, April 8. Available from:
      www.economist.com/news/business-and-finance/21696477-market-apps-maturing-now-
      one-text-based-services-or-chatbots-looks-poised
20.   Wiley, D., The instructional use of learning objects. Bloomington, IN: Agency for
      Instructional Technology and the Association for Educational Communications and
      Technology. Online: http://reusability.org/read, 2002.
21.   Alavi, M. and D.E. Leidner, Research commentary: Technology-mediated learning—
      A call for greater depth and breadth of research. Information systems research, 2001.
      12(1): p. 1-10.
22.   Söllner, M., et al., Process is king: Evaluating the performance of technology-
      mediated learning in vocational software training. Journal of Information
      Technology, 2018. 33(3): p. 233-253.
23.   High, R., The era of cognitive systems: An inside look at IBM Watson and how it
      works. IBM Corporation, Redbooks, 2012.
24.   Hill, J., W.R. Ford, and I.G. Farreras, Real conversations with artificial intelligence:
      A comparison between human–human online conversations and human–chatbot
      conversations. Computers in Human Behavior, 2015. 49: p. 245-250.
25.   Kerlyl, A., P. Hall, and S. Bull. Bringing chatbots into education: Towards natural
      language negotiation of open learner models. in International Conference on
      Innovative Techniques and Applications of Artificial Intelligence. 2006. Springer.