         With a New Helper Comes New Tasks
Mixed-Initiative Interaction for Robot-Assisted Shopping

                       Anders Green1 Helge Hüttenrauch1
                 Cristian Bogdan1 Kerstin Severinson Eklundh1
                  1
                      School of Computer Science and Communication
                            KTH Royal Institute of Technology
                                100 44 Stockholm, Sweden
                        {green, hehu, cristi, kse}@csc.kth.se



       Abstract. In the CommRob project (www.commrob.eu) we are investigating
       robot-assisted shopping, and we are considering the effects on usability
       of allowing mixed-initiative dialogue. We note that when a robotic
       assistant is added to a scenario that previously involved only one agent,
       two new tasks are created: collaborative interaction, and learning an
       interface. Evaluation of mixed-initiative dialogue thereby becomes
       complicated, because it is not straightforward to separate overall task
       performance from the attributes brought by mixed-initiative interaction.


1     Introduction
Some use scenarios for which natural language user interfaces are being developed draw on situations where two or more people naturally engage in collaborative communication (e.g., asking for time-table information, scheduling meetings, etc.). This is not the case in the CommRob project, where we are investigating how a robotic trolley can enhance shopping in a supermarket. The normal shopping scenario does not (usually) involve two agents solving the task using natural language. In our scenario the user enters a shopping list, e.g., by selecting products on a touch-screen or by using speech commands. The robot then guides the user to the product locations, allowing the user to scan the bar code and put the product in the trolley. During this scenario we want to allow the initiative to shift back and forth between user and system, based on what these agents consider beneficial for the collaborative task of assisted shopping.

1.1    Mixed-initiative interaction
Many approaches to robot interfaces assume a fixed-initiative, or command-based, style of interaction based on a controlled language centered around action verbs that are directly translated into physical robot actions, e.g., “go forward” [1], “wave” [2]. Such approaches rarely involve advanced dialogue management, and
usually rely on a more or less direct translation of natural language expressions into system movement primitives. Since mixed-initiative dialogue can be understood as a potentially complex way of solving a joint task [3], approaches for handling mixed-initiative dialogue have involved the extensive use of high-level dialogue models involving planning [4]. Mixed-initiative interaction is then carried out as a dynamic problem-solving activity, where agents negotiate their roles and adapt their interaction style dynamically to address the problem at hand [5].
    Creating a strategy for handling mixed initiative relies on information and actions from other types of components. Selecting such a strategy is a challenge, and we only briefly list some of the components that need to be considered, as suggested by [6] (a minimal sketch of how they might be composed follows the list):

 – A natural language dialogue model: the system needs to handle natural language dialogue, manage turn-taking, and engage in sub-dialogues, to mention only a few things that are relevant for mixed initiative.
 – A domain model defining the task, agent roles and obligations.
 – A model of user attention, to be able to decide when the user can be interrupted.
 – A strategy for managing initiative: based on the current status of the dialogue, the task, and the user's attentional state, the system should decide whether to take action to challenge the initiative.
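
To make the interplay of these components a little more concrete, the following is a minimal, hypothetical sketch of how a dialogue state, an attention estimate, and an initiative strategy might be composed. It is not part of the CommRob implementation; all class names, fields, and the decision rule are illustrative assumptions.

    from dataclasses import dataclass, field
    from enum import Enum, auto


    class Initiative(Enum):
        """Who currently holds the initiative in the dialogue."""
        USER = auto()
        ROBOT = auto()


    @dataclass
    class DialogueState:
        """Hypothetical snapshot of dialogue, task and attention state."""
        open_subdialogues: list = field(default_factory=list)    # e.g. pending clarifications
        pending_task_events: list = field(default_factory=list)  # e.g. "route_changed"
        user_attention_free: bool = True                         # from the (assumed) attention model
        holder: Initiative = Initiative.USER


    def decide_initiative(state: DialogueState) -> Initiative:
        """Toy strategy: the robot claims the initiative only when it has
        something task-relevant to say and the user appears interruptible."""
        if state.pending_task_events and state.user_attention_free:
            return Initiative.ROBOT
        if state.open_subdialogues:
            # An unfinished sub-dialogue: leave the initiative where it is.
            return state.holder
        return Initiative.USER


    if __name__ == "__main__":
        state = DialogueState(pending_task_events=["route_changed"])
        print(decide_initiative(state))  # -> Initiative.ROBOT

A real strategy would of course weigh many more factors (obligations, dialogue history, urgency), but even this toy version shows how the listed models feed a single initiative decision.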



         Fig. 1. The complexity brought by introducing a robotic helper. (Diagram: “Single shopping” — user, domain understanding; “Collaborative shopping” — user + robot, domain understanding, system understanding, mixed-initiative strategy, natural language capability, user model, domain model.)




1.2   Roles for a new helper
Given the research in the field of social robotics [7], it is perhaps uncontroversial to assume that users and robotic agents form some kind of social relation when engaging in interaction. One way of understanding this social relation is to think of robots and humans in terms of roles. Thus the robot may be considered a helper, a facilitator, an information provider or even a sales agent acting on behalf of the store owner. These roles can be performed through the employment of social communicative behaviour involving various interaction styles based on domain knowledge and task capabilities. One approach to modelling initiative is to define the social obligations [8] that come with a role and to associate these with communicative goals. For instance, an initiative model should consider when it is acceptable for a robot or a computer to interrupt someone, given the social obligations of the role the robot plays in a situation [9]. Other social obligations may be inherited from the definition of the role. Being a helper could by definition entail that the robot should not ignore products on the shopping list or lead the user to a product of another brand. Designing an initiative strategy therefore involves deciding which behaviours to engage in, depending on what is believed to be an appropriate action that contributes to the joint task, taking the goals of the involved agent roles into account. In the shopping scenario this may involve the following conditions and resulting actions (illustrated by the rule sketch after the list):
 – The robot passes a product that is on the shopping list on the route to another product → indicate the product to the user.
 – The planned route changes based on information gathered (including communication with other robots) → offer another route.
 – A passage is too narrow, or the way is blocked → suggest a manual override of steering (our robot prototype has a haptic steering device).
 – The user has given preferences, e.g., concerning allergies or special diets → suggest alternative or suitable products.
 – In case of severe problems → suggest calling staff, or call staff directly.
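
As an illustration only, such conditions and actions could be encoded as a simple condition-action rule table. The event names, fields and action labels below are assumptions made for the sketch, not the CommRob design.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple


    @dataclass
    class ShoppingContext:
        """Hypothetical world/task state visible to the initiative strategy."""
        nearby_list_items: List[str]   # listed products close to the current route
        route_replanned: bool
        passage_blocked: bool
        dietary_conflict: bool
        severe_problem: bool


    # Each rule pairs a condition with a robot-initiative action; names are illustrative.
    RULES: List[Tuple[Callable[[ShoppingContext], bool], str]] = [
        (lambda c: bool(c.nearby_list_items), "indicate_nearby_products"),
        (lambda c: c.route_replanned,         "offer_alternative_route"),
        (lambda c: c.passage_blocked,         "suggest_manual_steering_override"),
        (lambda c: c.dietary_conflict,        "suggest_alternative_product"),
        (lambda c: c.severe_problem,          "suggest_or_call_staff"),
    ]


    def robot_initiatives(ctx: ShoppingContext) -> List[str]:
        """Return the initiative-taking actions whose conditions currently hold."""
        return [action for condition, action in RULES if condition(ctx)]


    if __name__ == "__main__":
        ctx = ShoppingContext(nearby_list_items=["milk"], route_replanned=False,
                              passage_blocked=True, dietary_conflict=False,
                              severe_problem=False)
        print(robot_initiatives(ctx))
        # -> ['indicate_nearby_products', 'suggest_manual_steering_override']

Whether and how each triggered action is actually realised would then depend on the attention model and the social obligations discussed above.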
Since the communicative framework [10] we are investigating in the CommRob project provides a general approach to representing interactive systems, initiative can be taken by the robot using several modalities, including the GUI, speech output, and robotic full-body gestures [10, 11].

1.3   Evaluation of robotic helpers
Until now the shopping scenario has been described as replacing one old and known task with another. But we might also understand robot-assisted shopping as two new, but related, tasks: collaborative, assisted shopping and learning to use a multimodal user interface. In our view we need to take this into consideration and evaluate the robot system along several dimensions, such as overall usability and user experience, but also specific components such as the mixed-initiative strategy. Two approaches to the evaluation of mixed-initiative dialogue have been proposed by Guinn [12]: performance measures of how well the system model fits descriptive data, i.e., a corpus of mixed-initiative interaction; and analysis of the dialogues resulting from the initiative model to establish whether they have the desired qualities.
     As for the first approach, we have already established that we cannot simply collect data on existing collaborative shopping to test our system against. One common approach for collecting data on interaction with a future system is to build a prototype using the Wizard-of-Oz technique [13, 14]. The prototyping involved in the Wizard-of-Oz method provides the means to try out and evaluate mixed-initiative strategies early in the development process.
    The second approach involves finding out what the desirable qualities of a mixed-initiative dialogue for a robot are. This raises several challenges. First of all, the robot needs to have some initial natural language capability that can support the mixed-initiative strategy; e.g., the performance of the speech recognizer may account for much of the overall performance of the system [12]. One approach to qualitative evaluation of dialogues is to use synthetic dialogues [14], whereby a designer constructs dialogues the system is intended to handle. This approach allows for evaluation against what we may call a synthetic corpus. Doing this for a multimodal system is a challenging task, and it should therefore be complemented with other methods for eliciting use data, such as Wizard-of-Oz.
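
A toy illustration of what such a synthetic-corpus check could look like is given below: designer-written dialogues are replayed against a dialogue manager and mismatches are collected. The utterances, the corpus format and the stand-in manager are all hypothetical, chosen only to make the idea concrete.

    # Toy "synthetic corpus": dialogues the designer expects the system to handle,
    # replayed against a (here trivial) dialogue manager. All names are hypothetical.

    synthetic_corpus = [
        [
            ("user",  "add milk to the shopping list"),
            ("robot", "Milk added. Shall I guide you there now?"),
            ("user",  "yes"),
        ],
    ]


    def replay(dialogue, handle_turn):
        """Feed user turns to the dialogue manager and collect the expected
        robot turns it fails to reproduce."""
        failures = []
        last_response = None
        for speaker, utterance in dialogue:
            if speaker == "user":
                last_response = handle_turn(utterance)
            elif last_response != utterance:
                failures.append((utterance, last_response))
        return failures


    if __name__ == "__main__":
        # Stand-in manager that always gives the same canned reply.
        def dummy_manager(user_utterance):
            return "Milk added. Shall I guide you there now?"

        print(replay(synthetic_corpus[0], dummy_manager))  # -> [] (no failures)

For a multimodal robot the "turns" would also have to cover gestures, movement and GUI output, which is why we expect this method to complement rather than replace data-driven evaluation.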
    From an experiential point of view we should also take into account that robot systems are physically embodied and that users may form (limited) social relations to them. This means that we need to evaluate to what extent the movements and anthropomorphic qualities of the robot affect the way the mixed-initiative dialogue is experienced by users.
    Another approach to qualitative evaluation of mixed-initiative strategies of robotic helpers is to use heuristic evaluation [15], based on design guidelines [16] or maxims for communication [17]. Such guidelines should also be considered during the interaction design phase.


2   Conclusions
In this position paper we argue that introducing a robot into a scenario that was previously managed by a single user creates new tasks: collaborative mixed-initiative interaction with a robotic agent, and learning to handle a new interface. Evaluating this involves assessing several components, either by comparison with corpus data collected using hi-fi simulation or early prototypes, or by qualitative evaluation based on heuristics or synthetic dialogues.


References
 1. Zelek, J.S.: Human-Robot Interaction with Minimal Spanning Natural Language Template for Autonomous and Tele-Operated Control. In: Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '97), Volume 1 (1997) 299–305
 2. Oka, T., Abe, T., Sugita, K., Yokota, M.: Runa: a multimodal command language for home robot users. Artificial Life and Robotics 13(2) (2009) 455–459
 3. Horvitz, E.: Uncertainty, action, and interaction: in pursuit of mixed-initiative
    computing. IEEE Intelligent Systems 14(5) (1999) 17–20
 4. Clodic, A., Alami, R., Montreuil, V., Li, S., Wrede, B., Swadzba, A.: A study
    of interaction between dialog and decision for human-robot collaborative task
    achievement. In: Proceedings of IEEE 16th International Symposium on Robot
    and Human Interactive Communication (RO-MAN 2007), Jeju, Korea (August
    26-29. 2007)
 5. Allen, J.F.: Mixed-initiative interaction. IEEE Intelligent Systems 14(5) (1999)
    14–16
 6. ACM: Proceedings and Presentations of the Workshop on Mixed-Initiative Intelligent Systems at IJCAI-2003 (August 9, 2003)
 7. Fong, T., Nourbakhsh, I., Dautenhahn, K.: A survey of socially interactive robots.
    Robotics and Autonomous Systems 42(3-4) (2003) 143–166
 8. Traum, D.R., Allen, J.F.: Discourse Obligations in Dialogue Processing. In Puste-
    jovsky, J., ed.: Proceedings of the Thirty-Second Meeting of the Association for
    Computational Linguistics, San Francisco (1994) 1–8
 9. Hüttenrauch, H., Severinson Eklundh, K.: To Help or Not to Help a Service Robot.
    In: Proceedings of the 12th IEEE International Workshop on Robot and Human
    Interactive Communication RO-MAN’2003, Millbrae CA, USA, IEEE (2003)
10. Kaindl, H., Falb, J., Bogdan, C.: Multimodal Communication Involving Movements
    of a Robot. In: CHI ’08 extended abstracts on Human factors in computing systems,
    New York, NY, USA, ACM (2008) 3213–3218
11. Green, A., Hüttenrauch, H.: Making a Case for Spatial Prompting in Human-Robot
    Communication. In: Multimodal Corpora: From Multimodal Behaviour theories
    to usable models, workshop at the Fifth international conference on Language
    Resources and Evaluation, LREC2006, Genova, Italy (May 22-27 2006)
12. Guinn, C.I.: Evaluating mixed-initiative dialog. IEEE Intelligent Systems 14(5)
    (1999) 21–24
13. Dahlbäck, N., Jönsson, A., Ahrenberg, L.: Wizard of Oz studies - why and how. Knowledge-Based Systems 6(4) (1993) 258–266
14. Green, A.: Designing and Evaluating Human-Robot Communication: Informing Design through Analysis of User Interaction. PhD thesis, Royal Institute of Technology, Stockholm, Sweden (February 2009)
15. Nielsen, J.: Heuristic Evaluation. In Nielsen, J., Mack, R., eds.: Usability Inspection
    Methods. John Wiley & Sons, New York, NY (1994)
16. Drury, J.L., Hestand, D., Yanco, H.A., Scholtz, J.: Design guidelines for improved human-robot interaction. In: CHI '04 extended abstracts on Human factors in computing systems, New York, NY, USA, ACM Press (2004) 1540
17. Grice, H.P.: Logic and conversation. In Cole, P., Morgan, J.L., eds.: Syntax and Semantics. Volume 3: Speech Acts. Academic Press, New York, NY (1975) 41–58