<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Contextual Knowledge to Resume Human-Agent Conversations when Programming the Intelligence of Smart Environments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Asterios Leonidis</string-name>
          <email>leonidis@ics.forth.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Margherita Antona</string-name>
          <email>antona@ics.forth.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Constantine Stephanidis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Foundation for Research and Technology - Hellas (FORTH) - Institute of Computer Science (ICS)</institution>
          ,
          <country country="GR">GREECE</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Crete, Department of Computer Science</institution>
          ,
          <country country="GR">GREECE</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper presents a hybrid technical solution for addressing conversational interruptions when interacting (via a typed interface) with a virtual agent to program the intelligence of a smart environment. The AmI Solertis system is a ubiquitous programming environment that facilitates the definition of the behavior of a Smart Environment. To address the issue above, AmI Solertis introduces a mechanism that stores any unexpectedly interrupted conversation in a stack, along with relevant contextual information. The context-sensitive information attached to the dialog is then used to re-establish a detailed context in the user's working memory when resuming human-agent conversations.</p>
      </abstract>
      <kwd-group>
        <kwd>Ambient Intelligence</kwd>
        <kwd>Conversational Agent</kwd>
        <kwd>Smart Environment Programming</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
A shift is currently taking place from the “one person with one computer” paradigm,
which is based on explicit human-computer interaction, towards a ubiquitous and
pervasive computing landscape, where implicit interaction and continuous cooperation are
becoming the norm of computer-supported activities. This modern way of living, where
technology and information flow around the physical environment, has led to the
emergence of the Ambient Intelligence (AmI) paradigm [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where physical objects are
enhanced with computer technology to communicate, share information and collaborate
with other technological devices in an intelligent fashion. AmI is a prominent
dimension in ICT, and industrial stakeholders have already acknowledged its benefits and
opportunities, introducing to the mass market digital devices and services that will
transform traditional environments into technologically enhanced “intelligent” ones [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
In order to maximize their efficiency, extensibility and adaptation to the needs of their
users, AmI systems need to be programmable. In fact, their programmers are expected
to be not only professional developers, but also inexperienced end-users who can
either modify the behavior of the system based on their current needs or extend its
intelligence even further to address future necessities. The latter, in combination with the
fact that programming such environments is inherently difficult due to their high
architectural and computational complexity, further complicates the overall process. To that
end, the Ambient Intelligence Research Programme of ICS-FORTH has developed
AmI Solertis [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a studio for Ambient Intelligence application development, which
empowers users to create behavior scenarios by reviewing and modifying the
high-level “business logic” of a smart environment in a user-friendly manner, through both a
visual programming platform and an accompanying chatbot agent. This paper aims to
demonstrate the technical solution applied by the AmI Solertis system in order to handle
interruptions that occur while a human actor is engaged in a conversation with a
chatbot agent, trying to define the parameters of, and deploy, a new script that dictates the
intelligent behavior of the Smart Environment. AmI Solertis relies on contextual
knowledge to smoothly resume the conversation by both adapting the virtual agent’s
behavior and bridging the human counterpart’s mental gap due to the context switch.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
The emerging paradigm of end-user programming, along with the rapid development of
the Internet of Things and of Smart Environments, strongly suggests that in the near
future end-users will have to be able to modify the behavior of the software artefacts
they possess [
        <xref ref-type="bibr" rid="ref12 ref7">7, 12</xref>
        ]. To that end, various alternatives have been proposed, with the visual
programming paradigm being the prominent choice, since it enables inexperienced
users to quickly learn how to build simple programs [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Nevertheless, the rise of
chatbots [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and the fusion of conversational interfaces into Smart Environments enable
the provision of more intuitive interaction paradigms (i.e., natural language dialogues)
between the user and the intelligent virtual agents (i.e., technological artefacts).
      </p>
      <p>
Despite their undeniable benefits, however, various challenges surface, including
the necessity to handle the interruptions that might occur. A great amount of research
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] has been conducted on how to handle interruptions that occur during human-agent
conversations and mainly originate from the human participant. However, in the
context of this work, the term conversational interruption refers to the unexpected
termination of the user’s interaction with one of the intelligent agents due to external stimuli.
The employed handling strategy is analogous to the approach for addressing the
problem of task switching from the Human-Computer Interaction (HCI) perspective (e.g.,
breadcrumbs) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and to the established guidelines on how to design mobile experiences
for partial attention and interruption (e.g., data segmentation, glanceability) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
The key aspect is the provision of relevant visual information (i.e., context) to assist the
user's wayfinding. In the case of AmI Solertis, where conversational dialogues
are used to build structured programs (i.e., behavior scripts), the availability of relevant
visual information is rather limited. Therefore, this work synthesizes a hybrid approach:
on the one hand, it introduces custom conversational models relying on utterance
modelling and annotation to understand natural language [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and on hierarchical dialog
models to implement the conversational frameworks [
        <xref ref-type="bibr" rid="ref18 ref2">2, 18</xref>
        ] that structure and assess
the progress of the expected interaction; on the other hand, it attaches appropriate
contextual knowledge to act as mental cues that facilitate conversation resumption.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Modelling Framework</title>
      <p>
This work has been applied in the context of the AmI Solertis system, which facilitates
the management of the available technological artifacts of a Smart Environment by its
occupants. To achieve that, AmI Solertis offers, amongst others, a virtual agent in
the form of a chatbot, named ADAM (Ambient and Distributed Agents’ Manager),
that can communicate with end-users via a natural-language textual interface in order
to help them accomplish numerous orchestration-related tasks (i.e., inquiries about
services’ status, definition of new behaviors, validation of existing ones, etc.). As
expected, within such environments interruptions and task switching are quite frequent;
therefore, ADAM incorporates functionality that enables recovery from such situations,
either by resuming interrupted conversations immediately in case the interruption ends soon,
or by providing relevant contextual information to help the user recall the initial
objective at a later time. A collection of meta-models that store domain-specific
conversation information has been designed and is used by ADAM’s Conversational
Interruption Handler (CIH).
Modelling Conversations. As mentioned above, in the context of this work, certain
types of conversational dialogs exist that are directly mapped to the available back-end
facilities of the AmI Solertis framework, namely: (i) Activation and Deactivation
Requests, (ii) Exploration Inquiries, (iii) Monitoring Inquiries, (iv) Recommendation
Inquiries, (v) Creation and Modification Commands and (vi) Help and Training Dialogs.
Given that ADAM supports certain user tasks (e.g., define a new behavior), every
dialog corresponds to a micro finite-state machine (FSM) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and populates the respective
data structures as the dialog with the user progresses and the FSM transitions from one
state to the next. An illustrative example of such a state machine is presented in Fig. 1;
the model of “Create a new behavior” is decomposed into its inner states, and three
alternative dialogues are provided to demonstrate ADAM’s in-order (dialogue 3) and
out-of-order (dialogues 1 and 2) data collection.
      </p>
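<p>The dialog-to-FSM mapping described above can be sketched as a slot-filling state machine that accepts entities in or out of order; the slot names (trigger, action, schedule) and class names below are illustrative assumptions, not the actual AmI Solertis meta-model:</p>

```python
from dataclasses import dataclass, field

@dataclass
class DialogFSM:
    """A micro FSM for one dialog; the state is derived from the missing slots.

    Slot names are hypothetical placeholders for the behavior parameters."""
    required: tuple = ("trigger", "action", "schedule")
    slots: dict = field(default_factory=dict)

    @property
    def state(self) -> str:
        missing = [s for s in self.required if s not in self.slots]
        return "COMPLETE" if not missing else "AWAITING_" + missing[0].upper()

    def consume(self, entities: dict) -> str:
        """Fill any recognized entities, in-order or out-of-order."""
        for name, value in entities.items():
            if name in self.required:
                self.slots[name] = value
        return self.state

fsm = DialogFSM()
fsm.consume({"action": "record Show-W"})   # out-of-order: the action arrives first
fsm.consume({"trigger": "airing time", "schedule": "tonight 21:00"})
print(fsm.state)  # COMPLETE
```

<p>Because the state is computed from whatever slots are still empty, the same model covers both the in-order and the out-of-order dialogues of Fig. 1.</p>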
      <p>
In addition to the FSM-specific meta-model that stores the behavior-related parameters,
ADAM’s Dialog Manager (DM), which orchestrates the overall process, stores any
active conversations together with their closure [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In particular, it persists the entire
environment of a conversation, including: (i) the complete stack of utterances of the current
session, (ii) the identified user intents (i.e., the task that the user aims to
accomplish via the current dialog), and (iii) any entities that have been successfully recognized
and will be forwarded to the services of the AmI Solertis framework to execute the
actual task (e.g., instantiate and deploy a new script that encodes the desired behavior).
Modelling Contextual Knowledge and Interruptions. Context of Use is the
cornerstone of Smart Environments, as it constitutes the glue that brings all the
isolated services together under a common roof and enables the environment’s proactive response
to user needs by permitting intelligence sharing across the various services (including
virtual agents). In such environments, various sensors and applications collect and
process huge amounts of information to distil and distribute useful insights in the form of
small information chunks (e.g., users’ location, activities at hand, state of interactive
applications, etc.) that can be persisted and used by other agents. Such abundant
contextual information is the key towards semantically annotating interruptions, as it
contains background data that will be used to provide mental cues to the user and assist
conversation continuation.
      </p>
<p>As an example, consider the following scenario: Mary, while watching TV, notices the
trailer of a new TV show that is about to air. The show seems interesting, so she decides
to ask ADAM to program its recording at the relevant time. At that moment, her friend
Anna, who has decided to pay her a visit, is standing at the front door. As expected, the
latter event is of higher priority, so the interaction with ADAM gets aborted. The
dialog has not finished yet, but contextual information has been recorded (Fig. 2);
in particular, the Dialog Manager has stored that: (i) Mary had started telling ADAM to
create a new behavior, (ii) she was watching TV, (iii) the TV was on channel X, (iv)
the program airing at that moment was Show-Z and (v) the trailer was about Show-W. In
the future, when Mary is available, ADAM can make a suggestion to resume their
conversation and, if necessary, assist her by recalling what she was about to ask,
retrieving and presenting that information.</p>
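<p>The contextual record attached to Mary’s interrupted dialog could be sketched as follows; the field names (intent, context_cues) are illustrative assumptions, not the actual AmI Solertis schema:</p>

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class InterruptedDialog:
    """Hypothetical record persisted by the Dialog Manager on interruption."""
    intent: str            # identified user intent
    entities: dict         # entities recognized so far (here: none yet)
    context_cues: dict     # snapshot of the Context of Use at interruption time
    interrupted_at: float = field(default_factory=time)

record = InterruptedDialog(
    intent="create_new_behavior",
    entities={},
    context_cues={
        "activity": "watching TV",
        "tv_channel": "Channel X",
        "program_airing": "Show-Z",
        "trailer_subject": "Show-W",
    },
)

# On resumption, the stored cues become a human-readable reminder:
summary = (f"You were {record.context_cues['activity']} "
           f"({record.context_cues['program_airing']} on "
           f"{record.context_cues['tv_channel']}) when you asked me to "
           f"{record.intent.replace('_', ' ')}.")
print(summary)
```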
      <p>
Evidently, the amount of information available at any given moment in a fully
connected smart environment is quite large; nevertheless, appropriate filtering techniques
are used to determine which contextual information is useful to attach to the
interruption event (e.g., presence of another user, activities/applications capturing the
user's focus, etc.). The AmI Solertis framework handles information storage using its
internal logging mechanisms and exposes to ADAM’s Conversational Interruptions
Handler only the MIME types [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] of the stored items and the reference points that facilitate their
retrieval. Since ADAM’s behavior is encoded using the AmI Solertis facilities, it can be
easily extended via plugins (i.e., behavior scripts) to introduce new strategies that rely
on contextual information that was not originally available; e.g., in the context of the
aforementioned scenario, if a TV channel can be queried to preview its content at a given
time through a 10-second video (including any commercials shown), then ADAM
could orchestrate its presentation to further assist Mary.
      </p>
    </sec>
    <sec id="sec-4">
<title>Resuming Conversations: The AmI Solertis Approach</title>
      <p>
The models by themselves can easily be used by ADAM’s submodules to handle a single
conversation that has been interrupted; however, interruptions happen all the time in our daily
environments and, in many cases, they happen concurrently. The objective of employing
ambient intelligence technologies is not just to limit unwanted interruptions, but mainly
to minimize their handling and recovery time [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], benefitting from contextual
information and collective intelligence.
      </p>
      <p>
Stack-Based Interruption Handling. The approach followed by the AmI Solertis framework
regarding conversational interruption handling is inspired by the methodology followed
by almost all compilers to manage their run-time memory as a stack [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]: whenever a procedure is called, space for
its local variables is pushed onto a stack, and when the procedure terminates, that space
is popped off the stack. In more detail, whenever a new conversation starts, ADAM’s
Dialog Manager creates a new activation record for that dialog, where the relevant
references to the appropriate models are stored (Fig. 3). In case of an interruption, the
current dialog’s contextual information is persisted, the overall dialog is marked as
incomplete, and it gets pushed onto the stack of dialogs for future reference. Additionally,
while the user is occupied handling the unexpected event, ADAM’s Conversational
Interruptions Handler attempts to evaluate the importance and the priority of the latest
conversation and to estimate the duration of the interruption, in order to determine
whether a resumption intervention should be scheduled as soon as the user becomes available.
For instance, if Mary has started saying “Tonight, schedule lights to…” or “When my
child approaches…” and the interruption originated from an expected event which is not
supposed to take long (e.g., a courier delivering a package), then ADAM will mark that
last conversation as critical and will prompt Mary to resume it at the earliest opportunity.
In different cases, other appropriate closure strategies will be applied.
Proactive Informative Interruptions. A conversation model, in the context of this
work, refers to a single command or a series of commands that will be submitted to the AmI
Solertis framework for execution. Every command specifies a set of mandatory fields
(i.e., intents, entities) that must be populated using information extracted from the
user’s input. Therefore, if ADAM identifies that those values are not properly set before
submission, it can proactively interrupt the conversation, altering its natural flow,
in order to get additional clarifications (e.g., user-specific jargon has been identified in
a mandatory field instead of the actual service’s name).
      </p>
<p>Interruption Avoidance or Fast-Track. ADAM, apart from artificially injecting
interruptions into active conversations, aims to predict: (a) whether interruptions are about to
happen and (b) how much time their handling will require, in order to determine how
to avoid them. In case of a potential conflict, ADAM either suggests postponing the
currently active conversation (e.g., if the user is about to get a notification saying that
the cooking process needs attention, a task that is expected to take 2 minutes to
complete, whereas the rule-creation process takes 5 minutes, then ADAM will ask for the
user’s permission to reschedule it for another time), or fast-tracks the interruption (e.g., asks
the user to check the pot now rather than waiting for two minutes first) to ensure that
the conversation will not stop from that point forward.</p>
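<p>The postpone-versus-fast-track decision can be sketched with the estimated durations from the cooking example; the function name and the decision rule below are illustrative assumptions, not the actual ADAM heuristics:</p>

```python
def resolve_conflict(remaining_dialog_min: float,
                     interruption_eta_min: float,
                     interruption_duration_min: float) -> str:
    """Decide how to keep an active conversation uninterrupted."""
    if interruption_eta_min >= remaining_dialog_min:
        return "no_conflict"               # dialog finishes before the event fires
    if interruption_duration_min <= interruption_eta_min:
        return "fast_track_interruption"   # handle the short event now, then resume
    return "suggest_postponing_dialog"     # event is long; reschedule the dialog

# Cooking example: notification due in 2 min, its handling takes 2 min,
# while the rule-creation dialog still needs 5 minutes.
print(resolve_conflict(remaining_dialog_min=5,
                       interruption_eta_min=2,
                       interruption_duration_min=2))  # fast_track_interruption
```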
      <p>
Conversation Abandonment. ADAM periodically monitors the context of use in order
to determine which pending conversations need to be discarded. To that end, ADAM
examines the participating services, triggers and actions of every partial activation
record and evaluates whether an equivalent behavior has already been installed in AmI
Solertis through any other configuration channel (e.g., manually via the supported
graphical editor [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], automatically by the AmI Solertis self-assessment mechanism, etc.).
Conversation Continuation. Most importantly though, ADAM’s Interruption Handler
aims to successfully complete any interrupted conversation. To that end, it monitors
interaction, and if a similar dialog begins or an analogous context is detected, it suggests
resuming a relevant pending conversation. Priority is given to conversations that were
recently interrupted (e.g., a few seconds ago), then to incomplete conversations that can
be finished quickly (e.g., only the confirmation step is missing) and finally to
conversations that were classified as important or time-critical by the relevant AmI Solertis
facilities (e.g., they involve sensitive family members, the user implied that the relevant
behavior should start today, etc.). On resumption, if the conversation was conducted some
time ago, ADAM provides a short summary to the user (e.g., “You were saying something about
Safety Automation”) to improve recollection. If necessary, ADAM can re-establish a more
detailed context of use in the user’s working memory by restoring the information attached
to that dialog; e.g., “You were watching a commercial about Show-Z on Channel X,
when you asked me to create a new rule to… Maybe you wanted to set a recurring
notification to remind you about that show or record it if you are unavailable?”, or “You
were saying that when night falls, for your family’s safety, you want to lock…”, etc.
      </p>
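<p>The resumption priority described above (recency first, then quick completion, then criticality) can be sketched as a sort key; the field names and the 30-second recency threshold are illustrative assumptions, not the actual AmI Solertis ranking:</p>

```python
def resumption_key(dialog: dict, now: float) -> tuple:
    """Sort key: recently interrupted, then quickly finishable, then critical."""
    recently = (now - dialog["interrupted_at"]) < 30   # seconds since interruption
    quick = dialog["missing_steps"] <= 1               # e.g., only confirmation left
    critical = dialog["time_critical"]
    # sort() is ascending, so negate the flags we want ranked first
    return (not recently, not quick, not critical)

pending = [
    {"id": "lock-doors",  "interrupted_at": 0,  "missing_steps": 3, "time_critical": True},
    {"id": "record-show", "interrupted_at": 95, "missing_steps": 1, "time_critical": False},
    {"id": "dim-lights",  "interrupted_at": 90, "missing_steps": 4, "time_critical": False},
]
pending.sort(key=lambda d: resumption_key(d, now=100))
print([d["id"] for d in pending])  # ['record-show', 'dim-lights', 'lock-doors']
```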
    </sec>
    <sec id="sec-5">
      <title>Conclusions and Future Work</title>
<p>This paper presented a hybrid approach towards addressing conversational
interruptions when interacting with a virtual agent to program the intelligence of a smart
environment. The AmI Solertis approach integrates existing HCI strategies for addressing task
switching with the modelling of conversational dialogs and introduces the notion of
run-time interruption handling as a stack, using contextual knowledge to resume
human-agent conversations. The added value of employing ambient intelligence technologies
is not just to limit unwanted interruptions, but mainly to minimize their handling and
recovery time, by re-establishing a more detailed context of use in the user’s working
memory using the information attached to the dialog at hand.</p>
<p>To investigate the performance of this approach in its actual target environment and
whether it fulfils its goals, a user-based study is planned to take place in the near future
in a smart home setup located in the Ambient Facility building at the premises of the
Institute of Computer Science (ICS) of the Foundation for Research and Technology -
Hellas (FORTH) in Heraklion, Crete.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
<mixed-citation>
          1.
          <string-name>
            <surname>Aho</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          et al.:
          <source>Compilers: Principles, Techniques, and Tools</source>
          . Addison-Wesley, Boston (
          <year>1986</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bellegarda</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          :
          <article-title>Spoken Language Understanding for Natural Interaction: The Siri Experience</article-title>
          . In: Natural Interaction with Robots,
          <source>Knowbots and Smartphones</source>
          . pp.
          <fpage>3</fpage>
          -
<lpage>14</lpage>
          Springer, New York, NY (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Borenstein</surname>
            ,
            <given-names>N.S.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Freed</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Multipurpose internet mail extensions (MIME) part two: Media types</article-title>
          . (
          <year>1996</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cook</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          et al.:
          <article-title>Ambient intelligence: Technologies, applications, and opportunities</article-title>
          .
          <source>Pervasive Mob. Comput. 5</source>
          ,
          <issue>4</issue>
          ,
          <fpage>277</fpage>
          -
          <lpage>298</lpage>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Green</surname>
            ,
            <given-names>T.R.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petre</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Usability Analysis of Visual Programming Environments: A “Cognitive Dimensions” Framework</article-title>
          .
          <source>J. Vis. Lang. Comput. 7</source>
          ,
          <issue>2</issue>
          ,
          <fpage>131</fpage>
          -
          <lpage>174</lpage>
          (
          <year>1996</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
<string-name>
            <surname>Hinman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>The Mobile Frontier</source>
          . O'Reilly Media, Inc. (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Holloway</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Julien</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The Case for End-user Programming of Ubiquitous Computing Environments</article-title>
          .
          <source>In: Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research</source>
          . pp.
          <fpage>167</fpage>
          -
          <lpage>172</lpage>
          ACM, New York, NY, USA (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kanner</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          et al.:
          <article-title>Comparison of two modes of stress measurement: Daily hassles and uplifts versus major life events</article-title>
          .
          <source>J. Behav. Med</source>
          .
          <volume>4</volume>
          ,
          <issue>1</issue>
          ,
          <fpage>1</fpage>
          -
          <lpage>39</lpage>
          (
          <year>1981</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Krug</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Don't make me think!: a common sense approach to Web usability</article-title>
          .
          <source>Pearson Education India</source>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Landin</surname>
            ,
            <given-names>P.J.:</given-names>
          </string-name>
          <article-title>The mechanical evaluation of expressions</article-title>
          .
          <source>Comput. J. 6</source>
          ,
          <issue>4</issue>
          ,
          <fpage>308</fpage>
          -
          <lpage>320</lpage>
          (
          <year>1964</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Leonidis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>Enabling Programmability of Smart Learning Environments by Teachers</article-title>
          . In: Distributed, Ambient, and Pervasive Interactions. pp.
          <fpage>62</fpage>
          -
          <lpage>73</lpage>
          Springer, Cham (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lieberman</surname>
          </string-name>
          , H. et al.:
          <article-title>End-User Development: An Emerging Paradigm</article-title>
          . In: Lieberman,
          <string-name>
            <surname>H.</surname>
          </string-name>
          et al. (eds.)
          <article-title>End User Development</article-title>
          . pp.
          <fpage>1</fpage>
-
          <lpage>8</lpage>
          Springer Netherlands (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <article-title>McKinsey and Company: There's No Place Like [ A CONNECTED ] Home</article-title>
          , http://www.mckinsey.com/connectedhome/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
14.
          <string-name>
            <surname>Nakano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>A two-layer model for behavior and dialogue planning in conversational service robots</article-title>
          . In:
          <source>2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005)</source>
          . IEEE (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Shawar</surname>
            ,
            <given-names>B.A.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Atwell</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Chatbots: Are they really useful?</article-title>
          In:
          <source>LDV Forum</source>
          . pp.
          <fpage>29</fpage>
          -
          <lpage>49</lpage>
          (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Sklyarov</surname>
          </string-name>
          , V.:
          <article-title>Hierarchical finite-state machines and their use for digital control</article-title>
          .
          <source>IEEE Trans. Very Large Scale Integr. VLSI Syst</source>
          .
          <volume>7</volume>
          ,
          <issue>2</issue>
          ,
          <fpage>222</fpage>
          -
          <lpage>228</lpage>
          (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Stolcke</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech</article-title>
          .
          <source>Comput. Linguist</source>
          .
          <volume>26</volume>
          ,
          <issue>3</issue>
          ,
          <fpage>339</fpage>
          -
          <lpage>373</lpage>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          et al.:
          <article-title>Fast and easy language understanding for dialog systems with Microsoft Language Understanding Intelligent Service (LUIS)</article-title>
          .
          <source>In: 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue</source>
          . p.
          <volume>159</volume>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>