<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Attentional Plan Execution for Human-Robot Cooperation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jonathan Cacace</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Riccardo Caccavale</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michelangelo Fiore</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rachid Alami</string-name>
          <email>rachid.alami@laas.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alberto Finzi</string-name>
          <email>alberto.finzi@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Napoli Federico II</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In human-robot interactive scenarios, communication and collaboration during task execution are crucial issues. Since human behavior is unpredictable and ambiguous, an interactive robotic system has to continuously interpret intentions and goals, adapting its executive and communicative processes to the user's behaviors. In this work, we propose an integrated system that exploits attentional mechanisms to flexibly adapt planning and executive processes to the multimodal human-robot interaction.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In social robotics, flexible and natural interaction with humans is often needed in the context of structured collaborative tasks. In these scenarios, the robotic system should be capable of adapting the execution of cooperative plans to complex human activities and interventions. Many mechanisms are indeed involved in human cooperation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], such as joint attention, action
observation, task-sharing, and action coordination [
        <xref ref-type="bibr" rid="ref15 ref19">19, 15</xref>
        ]. Furthermore, communication between humans involves different modalities, such as speech, gaze orientation, and gestures [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Several systems manage the human-robot cooperation by
planning (totally or partially) the action sequence for the agents involved in the
interaction [
        <xref ref-type="bibr" rid="ref11 ref12 ref20">11, 12, 20</xref>
        ]; however, planning and replanning processes can be time-expensive and can therefore impair the naturalness and the effectiveness of the interaction. In order to flexibly combine and orchestrate structured activities and reactive actions, we exploit the concept of cognitive control proposed by the cognitive neuroscience literature [
        <xref ref-type="bibr" rid="ref17 ref2">17, 2</xref>
        ]. Inspired by the supervisory attentional system and contention scheduling models [
        <xref ref-type="bibr" rid="ref17 ref2 ref9">17, 9, 2</xref>
        ], we propose an integrated framework that combines planning, attentional regulations, and multimodal human-robot interaction in order to flexibly adapt plan execution according to the executive context and the multimodal interaction processes.
      </p>
      <p>
        Multimodal Interaction and Dialogue Management. The multimodal HRI framework is in charge of recognizing multiple human commands and actions, such as utterances, gaze directions, gestures, or body postures, and of providing an interpretation of the user's intentions according to the dialogue context. Following the approach proposed in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], the multimodal recognition process is composed of three layers: the lower layer contains the classifiers of the single modalities; the middle layer, the fusion engine, exploits a Support Vector Machine (SVM)-based late fusion and provides an integration of the multiple inputs; the upper layer, the dialogue manager [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], integrates the representation of the dialogue, modelled as a Partially Observable Markov Decision Process (POMDP), and accomplishes the semantic interpretation of observations according to the context and the inner knowledge. The main feature of this structure is that the result of each layer is an N-best list of possible interpretations, which is fed to the next layer to resolve ambiguities in cascade.
      </p>
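      <p>The three-layer cascade described above can be sketched as follows. This is a hypothetical stand-in: the real lower layer uses trained per-modality classifiers, the fusion engine is SVM-based, and the dialogue manager is a POMDP, while here simple score tables and products play those roles.</p>

```python
def classify_modality(scores):
    """Lower layer (stand-in for a per-modality classifier):
    return an N-best list of (label, score) pairs."""
    return sorted(scores.items(), key=lambda kv: -kv[1])

def fuse(nbest_lists):
    """Middle layer (stand-in for the SVM-based late fusion):
    merge the per-modality N-best lists into a joint one."""
    joint = {}
    for nbest in nbest_lists:
        for label, score in nbest:
            joint[label] = joint.get(label, 0.0) + score
    return sorted(joint.items(), key=lambda kv: -kv[1])

def interpret(fused, context_prior):
    """Upper layer (stand-in for the POMDP dialogue manager):
    re-rank the fused hypotheses with a context-dependent prior."""
    scored = [(label, score * context_prior.get(label, 0.1))
              for label, score in fused]
    return sorted(scored, key=lambda kv: -kv[1])

# Ambiguous speech, clearer gesture: the cascade resolves the conflict.
speech = classify_modality({"take": 0.6, "give": 0.4})
gesture = classify_modality({"give": 0.7, "point": 0.3})
fused = fuse([speech, gesture])
best = interpret(fused, {"give": 0.9, "take": 0.2})[0][0]
print(best)  # "give": dominant once both modalities and context are combined
```

      <p>The point of the cascade is that no layer commits early: each one passes its full ranked list downstream, so a weak hypothesis can still win once later evidence supports it.</p>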
      <p>
        Human-Aware Task Planning. The system is endowed with a Human-Aware
Task Planner (HATP) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] which is based on Hierarchical Task Networks (HTN) and a SHOP-like [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] refinement process. HATP is able to produce hierarchical plans for multi-agent systems (including humans), generating a different sequence for each agent. Each action has a finite number of precondition links to other actions (which can be part of any sequence), allowing HATP to generate collaborative subtasks where several agents are involved. Furthermore, the actions of the domain can be associated with a duration and a cost function, while specific social rules can be defined along with a cost for their violation. By setting different parameters, the plans can be tuned to adapt the robot behavior to the desired level of cooperation.
      </p>
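      <p>The refinement process can be sketched with a toy HTN domain. The method table, action names, and agent assignments below are illustrative assumptions, not HATP's actual domain model:</p>

```python
METHODS = {
    # abstract task -> ordered list of subtasks (hypothetical domain)
    "install(bracket1, slot1)": ["clean(slot1)", "glue(slot1)", "fix(bracket1, slot1)"],
}

PRIMITIVES = {
    # primitive action -> agent it is assigned to (hypothetical assignment)
    "clean(slot1)": "robot",
    "glue(slot1)": "human",
    "fix(bracket1, slot1)": "human",
}

def decompose(task):
    """Recursively refine an abstract task into primitive actions."""
    if task in PRIMITIVES:
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan

def agent_streams(plan):
    """Split a refined plan into one action sequence per agent."""
    streams = {}
    for action in plan:
        streams.setdefault(PRIMITIVES[action], []).append(action)
    return streams

plan = decompose("install(bracket1, slot1)")
print(agent_streams(plan))
# {'robot': ['clean(slot1)'], 'human': ['glue(slot1)', 'fix(bracket1, slot1)']}
```

      <p>Splitting the refined plan into per-agent streams is what allows each agent to execute its own sequence while precondition links keep the streams synchronized.</p>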
      <p>
        Executive System. The executive process is managed by two subsystems: the
supervision system [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and the attentional system [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">8, 6, 7</xref>
        ]. The first one interacts with the task planner, monitors the plan execution, and formulates replanning requests. The second one exploits bottom-up (stimuli-oriented) and top-down (task-oriented) influences to regulate the plan execution and the dialogue policy. The attentional system manages a cognitive control cycle that continuously updates the system's working memory (WM) and a set of attentional behaviors (BP), exploiting the task structure defined in a long-term memory (LTM). Each task to be executed is associated with a hierarchical structure allocated in working memory; the leaves of this structure are concrete attentional behaviors whose activations are regulated top-down and bottom-up by the attentional influences (see [
        <xref ref-type="bibr" rid="ref5 ref8">8, 5</xref>
        ]). In this context, the dialogue policies are assimilated to special interactive behaviors, while the generated human-aware plans are exploited as guidance for action selection and execution.
      </p>
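      <p>The top-down and bottom-up regulation of behavior activations can be illustrated with a toy winner-take-all scheme. The multiplicative combination and the numeric values are our simplifying assumptions, not the authors' actual regulation model:</p>

```python
def activation(top_down_gain, bottom_up_stimulus):
    """Assumed combination rule: task-driven gain times stimulus strength."""
    return top_down_gain * bottom_up_stimulus

def select(behaviors):
    """Winner-take-all contention among the behaviors in working memory."""
    return max(behaviors, key=lambda name: behaviors[name])

behaviors = {
    # name: activation(top-down gain from the plan, bottom-up stimulus)
    "receive": activation(1.0, 0.2),  # planned action, but weak external stimulus
    "search":  activation(0.5, 0.9),  # off-plan action, but strongly stimulated
}
print(select(behaviors))  # "search": it wins once the stimulus for "receive" fades
```

      <p>Under this reading, the plan biases the contention (the top-down gain) without dictating the outcome, which is what lets the system deviate from the planned sequence when the environment demands it.</p>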
    </sec>
    <sec id="sec-2">
      <title>Case Studies</title>
      <p>
        The proposed architecture has been tested in a case study inspired by a human-robot co-working domain [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposed in the context of the SAPHARI project
(FP7-ICT-287513).
      </p>
      <p>The environment is set as follows: there are three work locations, each containing a slot (slot1, slot2, slot3), and a table that supports a set of objects, including a glue bottle (glueBottle) and some brackets (bracket1, bracket2). The user and the robot must cooperatively install the brackets in the slots by first cleaning the slot, then applying the glue to the area, and finally fixing the bracket on it. In this context, the integrated system can flexibly adapt plan execution to unexpected behaviors of the human, avoiding the computation of a new plan while enabling a smoother and more natural interaction. In the following, we provide some examples where the proposed system allows for overcoming impasses during the interaction.</p>
      <p>Handover to Search. In a first scenario, the robot is waiting for a bracket from the human in order to install it. In the planned sequence the human should bring the object to the robot; however, the human remains idle and does not interact as expected. According to the plan, the robot should keep waiting for the human; here, however, the attentional system comes into play: in the absence of external stimuli, the activation of the receive behavior decreases with time. Therefore, after some seconds of waiting, since the human does not cooperate in the task, the activation of the search behavior becomes dominant with respect to receive, hence the robot can abandon the handover behavior and start searching for the object by itself.</p>
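      <p>The scenario above can be illustrated numerically under an assumed exponential-decay model of the receive activation; the decay rate, threshold, and time step are purely hypothetical:</p>

```python
import math

def receive_activation(t, initial=1.0, decay_rate=0.3):
    """Assumed model: the activation decays exponentially while no
    external stimulus (the human offering the bracket) is perceived."""
    return initial * math.exp(-decay_rate * t)

SEARCH_ACTIVATION = 0.4  # roughly constant: the object could be anywhere

# Step time forward until search wins the contention over receive.
t = 0.0
while receive_activation(t) > SEARCH_ACTIVATION:
    t += 1.0
print(f"switch from receive to search after {t:.0f} s of idle waiting")
```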
      <p>Take to Search. In a second scenario, we consider the case of a robot that should get bracket1 and give it to the human, who has to fix it on a slot. As suggested by the plan, the robot travels to the table in order to get the object, but it is not there. Analogously to the previous case, since no environmental stimulation is present for the take action, the attentional system switches to a search behavior where the robot inspects other locations looking for bracket1. As soon as the object is found, the take action becomes dominant again, allowing the robot to continue the plan.</p>
      <p>Handover to Place. In a third scenario, we assume that the human is to obtain the glueBottle in order to glue slot1. Following the plan, the robot tries to perform a handover, but the human moves away from the working space (see Fig. 2). The supervision system cannot replan, since the human has not performed its action and the plan is still valid. However, the attentional system can solve the impasse without waiting for the human initiative. Indeed, the bottom-up stimulation of give decreases as the robot-human distance increases, while the place activation remains stable (depending on the table position). When place wins the contention, the robot places the object on the work location, allowing for plan continuation. Once the place is accomplished, the supervision system can manage the action substitution by changing the monitored human action from receive to take.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgment</title>
      <p>The research leading to these results has been supported by the RoDyMan and SAPHARI projects, which have received funding from the European Research Council under Advanced Grant agreement numbers 320992 and 287513, respectively. The authors are solely responsible for the content of this paper. It does not represent the opinion of the European Community, and the Community is not responsible for any use that might be made of the information contained therein.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Alami</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gharbi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vadant</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lallement</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suarez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>On human-aware task and motion planning abilities for a teammate robot</article-title>
          .
          <source>In: RSS HRCIM</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Botvinick</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braver</surname>
            ,
            <given-names>T.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barch</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carter</surname>
            ,
            <given-names>C.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          :
          <article-title>Conflict monitoring and cognitive control</article-title>
          .
          <source>Psychological Review</source>
          <volume>108</volume>
          (
          <issue>3</issue>
          ),
          <fpage>624</fpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bratman</surname>
            ,
            <given-names>Michael E.</given-names>
          </string-name>
          :
          <article-title>Shared agency</article-title>
          .
          <source>Philosophy of the social sciences: philosophical theory and scientific practice</source>
          , pp.
          <fpage>41</fpage>
          –
          <lpage>59</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Breazeal</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          : Designing Sociable Robots. MIT Press (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Broquere</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mainprice</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sidobre</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Staffa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>An attentional approach to human-robot interactive manipulation</article-title>
          .
          <source>I. J. Social Robotics</source>
          <volume>6</volume>
          (
          <issue>4</issue>
          ),
          <fpage>533</fpage>
          –
          <lpage>553</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Caccavale</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Plan execution and attentional regulations for flexible human-robot interaction</article-title>
          .
          <source>In: Proc. of SMC 2015</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Caccavale</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucignano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Staffa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Attentional top-down regulation and dialogue management in human-robot interaction</article-title>
          .
          <source>In: Proc. of HRI 2014</source>
          . pp.
          <fpage>129</fpage>
          –
          <lpage>130</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Caccavale</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucignano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Staffa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Attentional regulations in a situated human-robot dialogue</article-title>
          .
          <source>In: RO-MAN</source>
          . pp.
          <fpage>844</fpage>
          –
          <lpage>849</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>R.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shallice</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Hierarchical schemas and goals in the control of sequential behavior</article-title>
          .
          <source>Psychological Review</source>
          <volume>113</volume>
          (
          <issue>4</issue>
          ),
          <fpage>887</fpage>
          –
          <lpage>916</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Fiore</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clodic</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alami</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>On planning and task achievement modalities for human-robot collaboration</article-title>
          .
          <source>In: International Symposium on Experimental Robotics</source>
          , Marrakech/Essaouira, June 15-18
          (
          <year>2014</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Fong</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kunz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hiatt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bugajska</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The human-robot interaction operating system</article-title>
          .
          <source>In: Proc. of HRI 2006</source>
          . pp.
          <fpage>41</fpage>
          –
          <lpage>48</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Karpas</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levine</surname>
            ,
            <given-names>S.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>B.C.</given-names>
          </string-name>
          :
          <article-title>Robust execution of plans for human-robot teams</article-title>
          .
          <source>In: Proc. of ICAPS-15</source>
          . pp.
          <fpage>342</fpage>
          –
          <lpage>356</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Lallement</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Silva</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alami</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>HATP: An HTN Planner for Robotics</article-title>
          .
          <source>In: 2nd ICAPS Workshop on Planning and Robotics</source>
          , PlanRob
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Lucignano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cutugno</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A dialogue system for multimodal human-robot interaction</article-title>
          .
          <source>In: Proc. of ICMI</source>
          . pp.
          <fpage>197</fpage>
          –
          <lpage>204</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Mutlu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Terrell</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          :
          <article-title>Coordination mechanisms in human-robot collaboration</article-title>
          .
          <source>In: ACM/IEEE Intl. Conf. on Human-Robot Interaction (HRI)- Workshop on Collaborative Manipulation</source>
          . pp.
          <fpage>1</fpage>
          –
          <lpage>6</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Nau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lotem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muñoz-Avila</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>SHOP: Simple hierarchical ordered planner</article-title>
          .
          <source>In: Proceedings of the 16th International Joint Conference on Artificial Intelligence, Volume 2</source>
          . pp.
          <fpage>968</fpage>
          –
          <lpage>973</lpage>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shallice</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Attention to action: Willed and automatic control of behavior</article-title>
          . In:
          <article-title>Consciousness and self-regulation: Advances in research and theory</article-title>
          , vol.
          <volume>4</volume>
          , pp.
          <fpage>1</fpage>
          –
          <lpage>18</lpage>
          (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fiore</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cutugno</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>An extensible architecture for robust multimodal human-robot communication</article-title>
          .
          <source>In: Proc. of IROS 2013</source>
          . pp.
          <fpage>2208</fpage>
          –
          <lpage>2213</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Sebanz</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bekkering</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Knoblich</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Joint action: bodies and minds moving together</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          <volume>10</volume>
          (
          <issue>2</issue>
          ),
          <fpage>70</fpage>
          –
          <lpage>76</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiken</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Breazeal</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Improved human-robot team performance using Chaski, a human-inspired plan execution system</article-title>
          .
          <source>In: Proc. of HRI-11</source>
          . pp.
          <fpage>29</fpage>
          –
          <lpage>36</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>