     Structured Task Execution during Human-Robot
                    Co-manipulation

     Jonathan Cacace, Riccardo Caccavale, Alberto Finzi, and Vincenzo Lippiello

                  Università degli Studi di Napoli Federico II
{jonathan.cacace,riccardo.caccavale,alberto.finzi,lippiello}@unina.it



       Abstract. We consider a scenario in which a human operator physically interacts
       with a lightweight robotic manipulator to accomplish structured co-manipulation
       tasks. We assume that these tasks are interactively executed by combining the
       plan guidance and the human physical guidance. In this context, the human guid-
       ance is continuously monitored and interpreted by the robotic system to infer
       whether the human intentions are aligned or not with respect to the planned ac-
       tivities. This way, the robotic system can adapt the execution of the tasks accord-
       ing to the human intention. In this paper we present an overview of the overall
       framework and discuss some initial results.


Introduction

Safe and compliant physical human-robot interaction is a crucial issue in service robotics
applications [7, 8]. While cooperative robotic platforms are spreading, the interactive
execution of complex structured tasks remains a challenging research topic [12]. This
issue is particularly relevant in industrial service robotics scenarios, where the tasks
are usually explicitly formalized [22], but they have to be flexibly adapted to the
co-workers’ activities to ensure a safe and natural collaboration during task execution. In
contrast with alternative approaches to collaborative task/plan execution [14, 19, 6, 4,
3, 21, 20], in this paper, we focus on physical human-robot interaction during the exe-
cution of complex co-manipulation tasks and propose an approach which is based on
a continuous interpretation of the human physical guidance [9, 10]. Notice that while
multiple modalities are usually involved during human-robot collaboration [11, 18], in
this work we deliberately focus on physical interaction only. Specifically, in the proposed
framework, the operator’s physical interventions are continuously assessed with
respect to the planned activities and motions to estimate the human aims and targets.
Intention recognition methods typically consider external forces exerted by the human
on the robot side to regulate the low-level control of the robot [16, 17, 15]; in contrast,
in our framework the assessed human intentions are exploited to suitably adapt the
robotic collaborative behavior at different levels of abstraction (trajectory, target, task).
Not only are they used to adapt the robot role (from active to passive and vice versa)
and compliance during the co-manipulation, but also to modify the execution of a
cooperative task depending on the human interventions. When the human intentions are
aligned with the planned activities, these are maintained and the robotic manipulator
can autonomously execute the task or proactively guide the user towards the estimated
targets.
                  Fig. 1. The overall human-robot interaction architecture.


Otherwise, the robotic system can select alternative methods for task execution, change
targets or adjust the trajectory, while regulating the robot compliance to follow or lead
the human. The overall system has been demonstrated in a scenario where a human
operator interacts with a KUKA LBR iiwa arm to perform a simple assembly task.


System Architecture

Figure 1 illustrates the overall architecture. The High-Level Control System manages
task planning/replanning, monitoring and execution. The Human Operator can guide
the task execution by physically interacting with the manipulator. The robotic system
can estimate the forces Ft provided by the operator on the end-effector, while the opera-
tor perceives the associated force feedback Fext . The operator’s forces are continuously
monitored during physical human-robot interaction to interpret the human guidance in
the context of the current plan. Specifically, the High-Level Control System interacts
with the Operator Intention Estimation in order to define targets which are coherent
with both the human guidance and the planned activities. The selected target points Xt
and an associated control model b are provided to the Adaptive Shared Controller to
suitably generate the motion trajectory Xc which is compliant with the human guid-
ance. Finally, the outcome of the Adaptive Shared Controller is exploited by the Posi-
tion Controller that can directly generate positions and orientations for the manipulator
end-effector, delegating inner control loops to solve the associated inverse kinematics.
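
To make the data flow concrete, the following minimal Python sketch outlines one control
cycle through these modules; all names (wm, robot, force_sensor and the three injected
callables) are illustrative placeholders and not interfaces defined by the framework.

# Minimal sketch of one control cycle through the modules of Figure 1.
# All names are illustrative placeholders; the three callables stand for the
# modules detailed in the following sections.
def control_cycle(wm, robot, force_sensor,
                  intention_estimator, select_target, shared_controller, dt):
    F_t = force_sensor.read()                      # force exerted by the operator

    # High-Level Control System: plan-coherent candidate targets from the WM,
    # each assessed against the human guidance (Operator Intention Estimation).
    candidates = [(X, intention_estimator(F_t, robot.pose(), X))
                  for X in wm.enabled_targets()]

    # Target Selector: target point X_t and control mode b.
    X_t, b = select_target(candidates)

    # Adaptive Shared Controller: compliant motion reference X_c.
    X_c = shared_controller(b, X_t, F_t, robot.state(), dt)

    # Position-Controlled System: inner loops solve the inverse kinematics.
    robot.send_pose(X_c)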

High Level Control System. The High Level Control System integrates plan genera-
tion, plan monitoring, and execution (see Figure 2). The proposed framework is based
on an Executive System capable of continuously monitoring and orchestrating multiple
hierarchical tasks. It can exploit a hierarchical Task Planner for plan generation and
replanning, while a Target Selector is introduced to interpret the human guidance with
respect to the current tasks providing targets and control modes for the Adaptive Shared
Control. The proposed executive framework is inspired by the one proposed by [3, 4].
It is based on a control cycle that involves an internal structure, called Working Memory
(WM) and a plan library, called Long Term Memory (LTM). The LTM is a repository

Fig. 2. (Left) The High Level Control System is based on an Executive System which interacts with
a Task Planner to generate, expand and instantiate hierarchical tasks. The Long Term Memory
(LTM) is a task repository (hierarchical task definition). The Working Memory (WM) keeps track
of the tasks under execution. (Right) Plans in WM (light and dark gray ovals are for complex
and primitive tasks, green and blue boxes are preconditions and postconditions provided by the
complex/primitive operations).


The LTM is a repository that contains declarative representations of the tasks and the actions the robotic system
is able to execute. Each task can be allocated, hierarchically decomposed, and instanti-
ated into the WM, which represents the executive state of the system: the WM collects
the set of activities currently under execution, including both complex and primitive
tasks. Additional details can be found in [4, 5].
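
As a rough illustration of these two structures, the following Python sketch shows a
declarative LTM and the hierarchical allocation of a task into the WM; the encoding,
the class names and the top-level make-coffee task are our own, while the remaining
task names mirror the example of Figure 2.

from dataclasses import dataclass, field

# Illustrative Long Term Memory: a declarative repository where methods
# decompose complex tasks into subtasks and operators are primitive actions.
LTM = {
    "make-coffee":             {"type": "method", "subtasks": ["take(coffee)", "add-sugar(coffee)"]},
    "add-sugar(coffee)":       {"type": "method", "subtasks": ["add-white-sugar(coffee)"]},
    "take(coffee)":            {"type": "operator"},
    "add-white-sugar(coffee)": {"type": "operator"},
}

@dataclass
class WMNode:
    """A task instance allocated in the Working Memory."""
    name: str
    children: list = field(default_factory=list)

def allocate(task, ltm):
    """Hierarchically expand a task from the LTM into a WM subtree."""
    node = WMNode(task)
    if ltm[task]["type"] == "method":
        node.children = [allocate(sub, ltm) for sub in ltm[task]["subtasks"]]
    return node

wm_root = allocate("make-coffee", LTM)   # executive state: tasks under execution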

Plan Execution. Each task can be associated with a set of alternatives P1, . . . , Pk, each
representing a possible executable plan generated by an HTN planner. The alternative
plans are associated with suitable nodes allocated in the WM and connected with the
associated hierarchical task structure. When multiple conflicting behaviors are enabled,
the human operator’s guidance can be exploited to implicitly overcome the resulting
impasse by pointing the system towards the desired target. During the interaction, the WM
maintains the hierarchical structure of the allocated plans.


Integrating Robot and Human Guidance

The human interactive physical interventions are continuously interpreted in order to
estimate the associated intention and to accordingly adjust the robot collaborative be-
havior at different levels of abstraction: trajectories, targets, and tasks. The interpre-
tation of the human intention is obtained by the interaction of the High-Level Control
System and the Operator Intention Estimation modules (see Figure 1). The first one
proposes possible targets for the robotic manipulator which are consistent with the
activities in the WM; each possible target is evaluated by the Operator Intention Estimation
considering the human physical guidance. The interpreted targets are then provided to the
Target Selector, whose outcome is sent to the Adaptive Shared Controller that suitably
adapts the robot behavior: when the human guidance is coherent with respect to the
tasks in WM and a shared target clearly emerges, the Adaptive Shared Controller pro-
vides a compliant robotic behavior. Otherwise, if the assessed intention is misaligned
or the current target is ambiguous, the Adaptive Shared Controller switches to a passive
mode to enable comfortable human guidance.

Intention Classification. Given a target (and the trajectory to reach it), the human
interventions are classified into four possible (low-level) intentions. The user guidance
is Coinciding when it is coherent with both the target and the trajectory. The intervention
is assessed as Deviating when the human aims at adjusting the robot motion (e.g., in
order to avoid an obstacle) without changing the final target. If the human intention is to
oppose the robot motion (e.g., to stop or suspend the execution) we have an Opposite
intention, while when the opposition is aimed at switching towards a different task/target
we assess an Opposite Deviating intention. The intention classification mechanism is
based on a three-layer feed-forward neural network that classifies the aim of the human
physical interventions from three input data: the magnitude of the contact forces provided
by the operator; the distance between the current position of the end effector and the
closest point on the planned trajectory; and the deviation between the planned and human
motions (i.e., the angle between the two movement vectors). The network provides its
outcome on four nodes, each associated with one of the classes introduced
above: Coinciding, Deviating, Opposite, Opposite Deviating. Additional details about
this network can be found in [13, 2].
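
The structure of such a classifier can be sketched as follows; this is a minimal illustration
assuming PyTorch and a hidden-layer size of 16, neither of which is a detail taken from [13, 2].

import torch
import torch.nn as nn

# Illustrative three-layer feed-forward classifier over the three features above.
INTENTIONS = ["Coinciding", "Deviating", "Opposite", "Opposite Deviating"]

classifier = nn.Sequential(
    nn.Linear(3, 16),   # inputs: force magnitude, distance to the planned path, deviation angle
    nn.ReLU(),
    nn.Linear(16, 16),
    nn.ReLU(),
    nn.Linear(16, 4),   # one output node per intention class
)

def classify_intention(force_magnitude, path_distance, deviation_angle):
    """Return the most likely intention class for one sample of the three features."""
    x = torch.tensor([[force_magnitude, path_distance, deviation_angle]])
    logits = classifier(x)
    return INTENTIONS[logits.argmax(dim=1).item()]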

Target Selection. As already explained above, the multiple plans allocated in the WM
are hierarchically decomposed down to the primitive operators. Each allocated primitive
operator, when enabled (i.e., all the associated preconditions are satisfied), is associated
with a possible target point and trajectory, which is assessed by the Operator Intention
Estimation and then sent to the Target Selector. More specifically, at each time step, all
the enabled primitive operators produce a target point and the associated intention
estimation; these are then exploited by the Target Selector to define both the current target
position and the interaction mode for a compliant interaction. The target selec-
tion process works as follows. Whenever there is only one target with the associated
intention assessed as Coinciding or Deviating, that target is selected; otherwise, no
target is selected. The Target Selector couples each target with an operation mode that
coincides with the estimated intention in the case of Coinciding or Deviating; when
no target can be selected, the operation mode is set to Passive, leaving the lead to the
human operator until a clear target is again selected in the following time steps.
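
The selection rule can be made explicit with a short sketch; the data structures below are
illustrative, while the rule itself follows the description above.

def select_target(candidates):
    """candidates: list of (target_point, intention) pairs, one per enabled
    primitive operator. Returns (target, mode): the single plan-coherent target
    and its operation mode, or (None, "Passive") when no target clearly emerges."""
    coherent = [(x_t, intent) for x_t, intent in candidates
                if intent in ("Coinciding", "Deviating")]
    if len(coherent) == 1:
        x_t, intent = coherent[0]
        return x_t, intent        # the operation mode coincides with the intention
    return None, "Passive"        # ambiguous or misaligned: leave the lead to the human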

Adaptive Shared Controller. The Adaptive Shared Controller receives target positions
Xt and the operation mode (Coinciding, Deviating, Passive) from the High-Level
Control System to generate the motion data Xd needed to reach the target. During the
execution, the human exerts a force Ft on the end effector. Since the manipulator should
be adaptive with respect to the operator’s physical guidance, we exploit an admittance
controller, which is described by the second-order equation:

              \ddot{X}_c^{\,i+1} = \frac{M \ddot{X}_d^{\,i} + D(\dot{X}_d^{\,i} - \dot{X}_c^{\,i}) + K(X_d^{\,i} - X_c^{\,i}) + F_t}{M},              (1)

with M, D and K representing, respectively, the desired virtual inertia, the virtual
damping and the virtual stiffness. The output of this module is the instant compliant
position Xc , representing the control command for the Position-Controlled System. De-
pending on the estimated target and the human intention, the robotic manipulator may
set a passive or an active mode. In the first case, the manipulator is fully compliant to
the operator interaction without providing any contribution to the task execution. In the
second case, the robot can assist the operator during the execution of the cooperative
task. The switch from an active to a passive mode is obtained by removing the virtual
stiffness from Eq. 1 and by setting to zero the desired acceleration and velocity. In-
stead, when the target is associated with a Coinciding or Deviating mode, the virtual
stiffness is set to a value higher than zero. In particular, when the operator intention is
interpreted as Coinciding, the planned target point and the motion trajectories are main-
tained, along with the admittance parameters for cooperative manipulation. Instead,
when the operation mode is Deviating, a more docile behavior for the robot is needed.
In order to achieve this effect, while the operation mode is Deviating, the Adaptive
Shared Controller not only sets specific admittance parameters, but also generates in-
termediate target points between the final target position Xt and the closest point on the
planned path Cp. This intermediate target is updated until the operation mode changes
in order to smoothly guide the user towards the planned trajectory. When the manip-
ulator is guided back to the planned trajectory, a Coinciding mode is activated again.
Similarly to [1], as a side effect of the robot compliant behavior, the operator receives
a force feedback from the robotic manipulator that provides a haptic perception of the
displacement between the current robot state and the planned one.
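
A discrete-time sketch of this behavior is given below; the gains, the Euler integration,
the zero desired velocity/acceleration and the midpoint rule for the intermediate target
are illustrative assumptions rather than values from the paper.

import numpy as np

# Illustrative discrete-time implementation of Eq. 1 with the mode-dependent
# handling of the virtual stiffness described above.
M, D, K = 5.0, 25.0, 120.0          # virtual inertia, damping and stiffness (assumed values)

def admittance_step(mode, X_t, C_p, F_t, X_c, Xdot_c, dt):
    """One update of the compliant position X_c (all vectors are 3D numpy arrays).

    mode : "Coinciding", "Deviating" or "Passive"
    X_t  : target position selected by the High-Level Control System
    C_p  : closest point on the planned path
    F_t  : force exerted by the operator on the end effector
    """
    Xdot_d = np.zeros(3)             # desired velocity (simplified to zero here)
    Xddot_d = np.zeros(3)            # desired acceleration (simplified to zero here)

    if mode == "Passive":
        K_eff, X_d = 0.0, X_c        # no virtual stiffness: fully compliant robot
    elif mode == "Deviating":
        K_eff = K
        X_d = 0.5 * (X_t + C_p)      # intermediate target between X_t and C_p
    else:                            # "Coinciding": keep the planned target
        K_eff = K
        X_d = X_t

    # Eq. 1: acceleration of the compliant trajectory.
    Xddot_c = (M * Xddot_d + D * (Xdot_d - Xdot_c) + K_eff * (X_d - X_c) + F_t) / M
    Xdot_c = Xdot_c + Xddot_c * dt   # Euler integration
    X_c = X_c + Xdot_c * dt
    return X_c, Xdot_c

In this sketch, a zero virtual stiffness leaves the operator in the lead, while a nonzero
stiffness attracts the end effector towards the plan-coherent (or intermediate) target.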


Pilot Study

The system has been assessed on a simple assembly task that involves the human
and the robot in building a small pyramid of objects, as illustrated in Figure 3 (left).
As for the robotic platform, we exploited a KUKA LWR IV+, equipped with a WSG50
two-finger gripper, in a table-top workspace of 50 × 70 cm.


Fig. 3. (Left) Experimental setup for the assembly task: it comprises three colored blocks and a
support; the blocks have to be assembled on the support to create a pyramid. (Right) The two
plans allocated in the WM represent two ways to accomplish the task.


In this experiment, two alternative plans can be executed to accomplish the task (Figure 3,
right), and the actual plan/action selection process depends on the user’s physical guidance
during the interaction. Our aim was to test whether the proposed plan guidance supports
cooperative task execution and enhances the accuracy of intention estimation. We involved
three users in the tests; each tester executed the task four times, with or without the plan
guidance.
Despite the simplicity of the task, we observed a clear impact of the plan guidance on
the intention estimation accuracy (0.962 with plan guidance vs. 0.575 without) and
an associated speed-up in task execution (up to 59.6%). Additional details
about the experimental results can be found in [2].


Conclusions

We presented a framework that integrates interactive plan execution and physical human-
robot interaction in order to enable the execution of complex co-manipulation tasks. We
assume that the system is endowed with hierarchically represented tasks that can be exe-
cuted by exploiting the human physical guidance. In contrast with alternative approaches
to physical human-robot interaction, in the proposed framework the operator physical
guidance is interpreted in the context of a structured collaborative task. In this setting,
during the interactive manipulation, the user interventions are continuously assessed
with respect to the possible alternative tasks/activities proposed by the plan in order to
infer trajectory deviations and task switches. The robotic compliance is then suitably
regulated. The proposed framework has been demonstrated in a real-world testing sce-
nario in which a user interacts with a lightweight manipulator in order to accomplish a
simple assembly task. The collected results suggest that the system is more effective
when the plan guidance is active, with a positive impact on both the time needed to
execute the task and the classification performance.

Acknowledgement. The research leading to these results has been supported by the ERC
AdG-320992 RoDyMan, H2020-ICT-731590 REFILLs, MISE ROMOLO, and DIETI
MARsHaL.


References

 1. Cacace, J., Finzi, A., Lippiello, V.: A mixed-initiative control system for an aerial service
    vehicle supported by force feedback. In: ICRA 2014. pp. 1230–1235 (2014)
 2. Cacace, J., Caccavale, R., Finzi, A., Lippiello, V.: Interactive plan execution during human-
    robot cooperative manipulation. IFAC-PapersOnLine 51, 500–505 (2018)
 3. Caccavale, R., Cacace, J., Fiore, M., Alami, R., Finzi, A.: Attentional supervision of human-
    robot collaborative plans. In: RO-MAN 2016. pp. 867–873 (2016)
 4. Caccavale, R., Finzi, A.: Flexible task execution and attentional regulations in human-robot
    interaction. IEEE Trans. on Cog. and Devel. Systems 9(1), 68–79 (2017)
 5. Caccavale, R., Saveriano, M., Finzi, A., Lee, D.: Kinesthetic teaching and attentional super-
    vision of structured tasks in human–robot interaction. Autonomous Robots (2018)
 6. Clodic, A., Cao, H., Alili, S., Montreuil, V., Alami, R., Chatila, R.: SHARY: A supervision
    system adapted to human-robot interaction. In: ISER. Springer Tracts in Advanced Robotics,
    vol. 54, pp. 229–238. Springer (2008)
 7. Colgate, J.E., Peshkin, M.A., Wannasuphoprasit, W.: Cobots: Robots for col-
    laboration with human operators. In: ASME Dynamic Systems and Control Division. pp.
    433–439 (1996)
 8. De Santis, A., Siciliano, B., De Luca, A., Bicchi, A.: An atlas of physical human-robot interac-
    tion. Mechanism and Machine Theory 43(3), 253–270 (2007)
 9. Hoffman, G., Breazeal, C.: Collaboration in human-robot teams. In: AIAA Intelligent Sys-
    tems Technical Conf. (2004)
10. Hoffman, G., Breazeal, C.: Effects of anticipatory action on human-robot teamwork effi-
    ciency, fluency, and perception of team. In: HRI 2007. pp. 1–8 (2007)
11. Iengo, S., Rossi, S., Staffa, M., Finzi, A.: Continuous gesture recognition for flexible human-
    robot interaction. In: 2014 IEEE ICRA. pp. 4863–4868 (2014)
12. Johannsmeier, L., Haddadin, S.: A Hierarchical Human-Robot Interaction-Planning Frame-
    work for Task Allocation in Collaborative Industrial Assembly Processes. IEEE Robotics
    and Automation Letters 2(1), 41–48 (Jan 2017)
13. Cacace, J., Finzi, A., Lippiello, V.: Enhancing shared control via contact force classifica-
    tion in human-robot cooperative task execution. In: Human Friendly Robotics, pp. 167–179.
    SPAR, Springer (2018)
14. Karpas, E., Levine, S.J., Yu, P., Williams, B.C.: Robust execution of plans for human-robot
    teams. In: ICAPS 2015. pp. 342–346. AAAI Press (2015)
15. Li, Y., Tee, K.P., Chan, W.L., Yan, R., Chua, Y., Limbu, D.K.: Continuous Role Adaptation
    for Human Robot Shared Control. IEEE Trans. on Robotics 31(3), 672–681 (2015)
16. Park, J.S., Park, C., Manocha, D.: Intention-aware motion planning using learning based
    human motion prediction. CoRR abs/1608.04837 (2016)
17. Peternel, L., Babic, J.: Learning of compliant human-robot interaction using full-body haptic
    interface. Advanced Robotics 27, 1003–1012 (2013)
18. Rossi, S., Leone, E., Fiore, M., Finzi, A., Cutugno, F.: An extensible architecture for robust
    multimodal human-robot communication. In: 2013 IEEE/RSJ IROS. pp. 2208–2213 (2013)
19. Shah, J., Wiken, J., Williams, B., Breazeal, C.: Improved human-robot team performance
    using chaski, a human-inspired plan execution system. In: HRI 2011. pp. 29–36 (2011)
20. Sidobre, D., Broquère, X., Mainprice, J., Burattini, E., Finzi, A., Rossi, S., Staffa, M.:
    Human-robot interaction. In: Advanced Bimanual Manipulation - Results from the DEX-
    MART Project, pp. 123–172 (2012)
21. Sisbot, E.A., Marin-Urias, L.F., Alami, R., Simeon, T.: A human aware mobile robot motion
    planner. IEEE Transactions on Robotics 23(5), 874–883 (2007)
22. Vernon, D., Vincze, M.: Industrial priorities for cognitive robotics. In: EUCognition. CEUR
    Workshop Proceedings, vol. 1855, pp. 6–9. CEUR-WS.org (2016)