<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>RE.</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3313831.3376825</article-id>
      <title-group>
        <article-title>Human-Drone Interactions with Semi-Autonomous Cohorts of Collaborating Drones</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jane Cleland-Huang</string-name>
          <email>JaneHuang@nd.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ankit Agrawal</string-name>
          <email>aagrawa2@nd.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Notre Dame</institution>
          ,
          <addr-line>Notre Dame, IN 46556</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>00034</volume>
      <fpage>361</fpage>
      <lpage>365</lpage>
      <abstract>
        <p>Research in human-drone interaction has primarily focused on cases in which a person interacts with a single drone as an active controller, recipient of information, or social companion, or cases in which an individual or a team of operators interacts with a swarm of drones performing coordinated flight patterns. In this position paper we explore a third scenario, in which multiple humans and drones collaborate in an emergency response setting. We discuss different types of interactions and draw examples from our ongoing DroneResponse project.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>This paper is published under the Creative Commons Attribution 4.0 International
(CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their
personal and corporate Web sites with the appropriate attribution.
Interdisciplinary Workshop on Human-Drone Interaction (iHDI 2020)
CHI ’20 Extended Abstracts, 26 April 2020, Honolulu, HI, US
© Creative Commons CC-BY 4.0 License.</p>
    </sec>
    <sec id="sec-2">
      <title>Author Keywords</title>
      <p>Human-drone collaboration, emergency response</p>
    </sec>
    <sec id="sec-3">
      <title>Introduction</title>
      <p>Small Unmanned Aerial Systems, which we refer to here as
drones, can be effectively deployed to support emergency
responders for diverse scenarios such as
search-and-rescue, accident surveillance, and flood inspections.
Currently, emergency responders tend to operate drones
manually or using off-the-shelf applications that allow them to
preprogram sets of waypoints. However, equipping drones
with onboard intelligence allows them to perform tasks
autonomously and to contribute more fully to the emergency
response.</p>
      <p>
        In our DroneResponse project we are designing and
developing a system to deploy and coordinate the efforts of
multiple semi-autonomous drones for use in emergency
situations [
        <xref ref-type="bibr" rid="ref1">1, 3</xref>
        ]. Our vision is for humans and drones to work
closely together as part of a complex mission – for
example to monitor air quality following a chemical explosion, to
perform search and rescue, to deliver medical supplies, or
to support firefighters during structural fires. As depicted
in Figure 1, there are several facets to human-drone
interaction in such scenarios. Humans need to communicate
mission goals and directives to groups of drones as well as
to individuals, while drones need to keep humans informed
of their current state and progress, and at times, need to
seek permission or guidance to perform specific tasks. In
addition, both humans and drones need to communicate
between themselves (i.e., drone-to-drone and
human-to-human) to coordinate their activities.
      </p>
    </sec>
    <sec id="sec-4">
      <title>An Interaction Example</title>
      <p>We provide examples of such interactions in the sequence
diagram depicted in Figure 2. The Incident Commander
first defines a search area and sends a request to the hive
controller to start the search (E1). This is an example of
human-to-drone interaction (H2D). The hive controller then
creates a search plan and assigns search routes to drones
(E2). The coordination between the drones represents
drone-to-drone (D2D) interaction. In the modeled sequence of
actions, Drone<sub>n</sub> detects a potential drowning victim. It then
notifies the human incident commander and starts
streaming annotated video to the ground (E3), thereby illustrating
drone-to-human communication (D2H). The part of the
sequence diagram highlighted in yellow provides an example
of a more complex bi-directional human-drone
conversation in which the drone uses its sensing (image detection)
abilities to detect a victim. It then autonomously switches to
track-victim mode, raises a victim-found alert, and streams
annotated video to the incident commander. Finally, the
incident commander uses the information relayed by the
drone to confirm the victim sighting and to push
information from the drone to a physical rescue team (E4). This
final step is an example of human-to-human (H2H)
interaction, triggered by the initial D2H exchange. This sequence
of events illustrates the complex socio-technical aspects of
emergent multi-user, multi-drone interaction spaces.</p>
    </sec>
    <sec id="sec-5">
      <title>Drone-to-Human Communication (D2H)</title>
      <p>
        There are numerous challenges that must be addressed in
order to achieve efficient human-drone collaboration. In our
concurrently published work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we have focused on
designing a user interface that enables situational awareness
(SA) [5] for human first responders. SA involves
perception (i.e., recognizing and monitoring), comprehension
(i.e., interpreting and synthesizing information), and
projection (i.e., understanding the situation, projecting future
outcomes) so that a user can make effective and
actionable decisions. The key to designing D2H communication
is identifying information that is needed by different user
roles within specific contexts. As an example, a drone may
be ascribed the ability to autonomously decide its speed
and altitude during a search. If visibility is good, the drone
might fly higher and faster in order to cover the search area
more quickly, while still returning accurate results. On the
other hand, if visibility is lower, the drone might need to fly
lower and slower, and adapt its flight plan to compensate
for a reduced field of view. In this scenario, the operator
needs visual cues and awareness of why a drone behaves
as it does. As an outcome of a four-month co-design
process with our local fire department, we identified two design
strategies to address this specific scenario. First, we
designed our DroneResponse GUI to depict any
environmental factors likely to impact drone behavior – for
example, low visibility, high winds, or prohibited airspace.
      </p>
      <p>[Figure 2: Sequence diagram of the river search example, showing messages such as streamImage(), confirmTrack(), missionCompleted(), and initiateRescue(GPS coordinates, imagery) exchanged between the incident commander, the drones, and the rescuers.]</p>
      <p>
Second, we enabled the drones to explain themselves on
demand by describing their current strategies and
permissions. In the case of searching for a victim in inclement
weather, the drone might explain “flying lower than
normal at 10 meters due to low visibility” or “searching river
banks at a greater distance than normal due to high wind
gusts and moving branches.” We report outcomes from our
co-design experience, especially with respect to D2H
interactions and achieving situational awareness in our related
paper [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
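      <p>A minimal sketch of these two strategies follows; the thresholds, field names, and explanation strings are our own illustrative assumptions rather than the actual DroneResponse logic. The drone selects its altitude and speed from sensed conditions and can render that decision as a human-readable explanation on demand.</p>
      <preformat>
# Hypothetical sketch: a drone adapts altitude and speed to conditions
# and explains its current strategy on demand. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class SearchParams:
    altitude_m: float
    speed_mps: float
    reason: str

def plan_search(visibility_m: float, wind_gust_mps: float) -> SearchParams:
    # Good visibility, calm air: fly higher and faster to cover the
    # search area more quickly while still returning accurate results.
    if visibility_m >= 1000 and wind_gust_mps &lt; 8:
        return SearchParams(30.0, 8.0, "clear conditions")
    # Poor visibility: fly lower and slower to compensate for the
    # reduced field of view.
    if visibility_m &lt; 400:
        return SearchParams(10.0, 3.0, f"low visibility ({visibility_m:.0f} m)")
    # High gusts: keep extra distance from moving branches and obstacles.
    return SearchParams(20.0, 5.0, f"wind gusts of {wind_gust_mps:.0f} m/s")

def explain(p: SearchParams) -> str:
    # Rendered when the operator asks the drone to explain itself.
    return f"flying at {p.altitude_m:.0f} m and {p.speed_mps:.0f} m/s due to {p.reason}"

print(explain(plan_search(visibility_m=250, wind_gust_mps=4)))
# -> flying at 10 m and 3 m/s due to low visibility (250 m)
      </preformat>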
    </sec>
    <sec id="sec-6">
      <title>Human-to-Drone Mission Directives (H2D)</title>
      <p>
        Achieving effective H2D communication is challenging in
systems with multiple drones and complex missions.
Several research groups have explored ways to specify drone
missions using formal commands, often embedded in
domain specific languages [
        <xref ref-type="bibr" rid="ref2">7</xref>
        ]. However, it is infeasible for
emergency responders to write such mission
specifications under the stressful time constraints of a life-and-death
response. A user interface is therefore needed that
enables quick mission planning and configuration and which
supports high-level directives addressed to the cohort of
drones, as well as specific directives addressed to
individual drones.
      </p>
      <p>
        Researchers have previously explored diverse solutions
for issuing commands to drones, such as the use of
gestures and voice commands [
        <xref ref-type="bibr" rid="ref5">6, 10</xref>
        ] or airplane-like cockpits
for controlling large military-style drones [
        <xref ref-type="bibr" rid="ref3">8</xref>
        ]. We have
prototyped the use of gestures and voice commands;
however, they have several shortcomings that inhibit their use
in emergency response scenarios. Voice commands, while
appealing, are impractical due to the noise inherent to a
rescue scene. This includes sirens, constant radio chatter,
and the additional noise of the drone motors. Gestures
are similarly impractical. They have been shown to work
effectively in controlled near-distance environments, which
is far from the case for an emergency response scenario
[2]. Furthermore, they introduce significant room for error,
which is unacceptable in an emergency response
environment, where mistakes could cost the lives of both the
victims and the rescuers. Domain experts collaborating in
the co-design of DroneResponse soundly ruled out both of
these approaches [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. We have therefore opted to create
a GUI-based solution for H2D commands for emergency
response missions.
      </p>
      <p>In our GUI, which is currently under development, users can
initially select a mission type from a high-level list of
missions as depicted in Figure 9. They then perform a series
of configurations such as marking a search area. Each
predefined mission type will have a corresponding underlying
mission plan with configuration points. This plan is
sufficient for allowing the mission to proceed through a series
of predefined stages and tasks (e.g., search, track, return
home). However, users will also need to configure or tweak
the mission dynamically as it evolves, by providing
additional directives.</p>
      <p>
        Each of these directives must specify who, what, where,
and how a task is to be accomplished. ‘Who’ refers to whether
the command is addressed to the entire cohort or an
individual drone. When the command is addressed to the cohort,
the hive controller is empowered to autonomously determine which
drones are best fit to respond. ‘Who’ could also be
specified with constraints, such as three drones or drones with
thermal cameras onboard. Finally, ‘who’ could name a specific
drone when the Incident Commander wishes to assign a task
directly to it. This is riskier, as the selected drone might be
unfit for service (e.g., due to low battery or a current
critical task). ‘What’
refers to the specific task to be completed – for example
reconnaissance, delivery, or serving as a communication
relay if drones are communicating using onboard
communication channels such as ad-hoc Wi-Fi. ‘Where’ refers to a region
or point of interest defined by GPS coordinates. For
example, in the case of reconnaissance, the user might need to
direct the drone to a certain part of a wooded river bank
where somebody has sighted a piece of clothing; while in
the case of establishing a communication relay, the user
could either specify coordinates or allow the drone to
dynamically position itself so as to optimize communication
among all drones. Finally, ‘how’ enables specific directives for how the
task is to be completed. In some cases, the drones could
be given significant autonomy to complete well-defined
tasks, while in other cases more specific guidelines might
be required. We are currently working closely with several
emergency response organizations to better discover their
needs and to formally model diverse mission plans.
The GUI therefore provides a human-facing interface to an
underlying mission plan specified using a more formal
approach such as belief-desire-intent [
        <xref ref-type="bibr" rid="ref4">9</xref>
        ]. Drones are able to
interpret the more formal specification. In our initial
prototype we are experimenting with a flow-chart of buttons to
enable humans to configure mission directives in a known
space of options. These are depicted in Figures 6-8.
      </p>
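      <p>The who/what/where/how structure suggests a simple directive record, sketched below with hypothetical field names and values drawn from the examples above; this is not a DroneResponse schema.</p>
      <preformat>
# Hypothetical sketch of a mission directive: who / what / where / how.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class Directive:
    # Who: "cohort" (hive decides), a constrained subset, or one drone.
    who: str = "cohort"
    constraints: dict = field(default_factory=dict)
    # What: the task, e.g. reconnaissance, delivery, communication relay.
    what: str = "reconnaissance"
    # Where: a GPS point or region; None lets the drone position itself.
    where: Optional[Tuple[float, float]] = None
    # How: extra guidance; an empty dict grants full autonomy.
    how: dict = field(default_factory=dict)

# Cohort-level directive: two thermal-equipped drones search a point
# on the river bank where clothing was sighted.
d1 = Directive(who="cohort",
               constraints={"count": 2, "sensor": "thermal_camera"},
               what="reconnaissance",
               where=(41.703, -86.239),
               how={"pattern": "spiral", "max_altitude_m": 25})

# Relay directive with no 'where': the drone optimizes its own position.
d2 = Directive(who="cohort", what="communication_relay")
print(d1, d2, sep="\n")
      </preformat>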
    </sec>
    <sec id="sec-7">
      <title>GUI versus Physical Devices</title>
      <p>The discussion in this position paper has focused almost
entirely on human-drone interfaces based on the use of
graphical interfaces; however, drones can also be controlled
using physical hand-held devices. In a multi-drone scenario,
humans might need to switch between graphical and
physical interfaces for several reasons including taking manual
control of a malfunctioning drone or temporarily using
manual controls for a specific task that is currently beyond the
capabilities of a drone to perform autonomously. Our prior
work has shown that misalignment of GUIs and physical
controllers can easily lead to accidents [3] (see Figure 10).
For example, when control is passed from a computer to a
hand-held device, the physical switches on the hand-held
controller must be set to stable ‘flight-mode’ positions;
otherwise accidents, including crash landings, could occur.
Furthermore, when humans take over control of remote
drones, it can be exceedingly difficult to figure out which
direction the drone is facing. Commands are interpreted
relative to the drone’s own orientation, which means that issuing a
‘forward’ command causes the drone to fly forward relative
to its own heading; from the remote pilot’s perspective,
that could be in any direction. A simple design
solution might be to provide a feature to autonomously reorient
the drone with respect to the remote pilot so that physical
and GUI controls become aligned relative to the drone’s
and pilot’s positions. Given this reorientation, a forward
command would then consistently send the drone away
from the pilot, and a moveRight command would make it
move right. For deployment in emergency situations, more
thought should be invested in the use of both graphical and
physical interfaces, the interactions between them, and
transitions of control across devices and between different
operators.</p>
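      <p>A worked sketch of the proposed reorientation follows; the geometry is our own and this is not an implemented DroneResponse feature. Pilot-relative stick commands are rotated into the drone’s body frame using the bearing from pilot to drone, so that ‘forward’ always moves the drone away from the pilot.</p>
      <preformat>
# Hypothetical sketch: rotate pilot-relative commands into the drone's
# body frame so that 'forward' always flies away from the pilot.
import math

def bearing_deg(px: float, py: float, dx: float, dy: float) -> float:
    """Compass bearing from pilot (px, py) to drone (dx, dy); 0 = north."""
    return math.degrees(math.atan2(dx - px, dy - py)) % 360

def pilot_to_body(cmd_right: float, cmd_fwd: float,
                  pilot_bearing: float, drone_heading: float):
    """Rotate a pilot-relative (right, forward) command into body frame.

    cmd_fwd = 1 means 'away from the pilot'; the rotation compensates
    for the difference between that bearing and the drone's heading.
    """
    delta = math.radians(pilot_bearing - drone_heading)
    body_right = cmd_right * math.cos(delta) + cmd_fwd * math.sin(delta)
    body_fwd = -cmd_right * math.sin(delta) + cmd_fwd * math.cos(delta)
    return body_right, body_fwd

# The drone is due east of the pilot (bearing 90) but facing north (0).
# A 'forward' stick command (0, 1) becomes a body-frame 'right' (1, 0),
# so the drone moves east, i.e., away from the pilot.
b = bearing_deg(0, 0, 100, 0)
print(pilot_to_body(0.0, 1.0, pilot_bearing=b, drone_heading=0.0))
# -> approximately (1.0, 0.0)
      </preformat>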
    </sec>
    <sec id="sec-8">
      <title>Conclusion</title>
      <p>
        This position paper has presented an informal framework
for considering human-drone interactions along the
dimensions of H2D, D2H, D2D, and H2H communication in
multi-user, multi-drone environments where drones are permitted
to operate with some degree of autonomy. We have
described some of the challenges we are facing in the design
of DroneResponse and some initial ideas for addressing
those challenges. Our prior work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] has focused primarily
on the D2H challenge of supporting situational awareness,
while our ongoing work focuses on providing a meaningful
interface for more complex bidirectional human and drone
interactions.
      </p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgements</title>
      <p>The work described in this position paper has primarily
been funded by the National Science Foundation under
grants CNS-1737496, CNS-1931962, CCF-1647342, and
CCF-1741781. We also thank the Firefighters of South
Bend for closely collaborating on the DroneResponse project.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Abraham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Burger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Christine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fraser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoeksema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Travnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Scheirer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cleland-Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vierhauser</surname>
          </string-name>
          , R.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Markus</given-names>
            <surname>Funk</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Human-drone interaction: Let’s get ready for flying user interfaces!</article-title>
          .
          <source>Interactions</source>
          <volume>25</volume>
          , 3 (
          <year>2018</year>
          ),
          <fpage>78</fpage>
          -
          <lpage>81</lpage>
          . DOI: http://dx.doi.org/10.1145/3194317
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Sergio</given-names>
            <surname>García</surname>
          </string-name>
          , Patrizio Pelliccione, Claudio Menghi, Thorsten Berger, and
          <string-name>
            <given-names>Tomas</given-names>
            <surname>Bures</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>High-Level Mission Specification for Multiple Robots</article-title>
          .
          <source>Proceedings of the 12th ACM SIGPLAN International Conference on Software Language Engineering (SLE 2019)</source>
          . Association for Computing Machinery, New York, NY, USA,
          <fpage>127</fpage>
          -
          <lpage>140</lpage>
          . DOI: http://dx.doi.org/10.1145/3357766.3359535
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Hobbs</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Lyall</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Human Factors Guidelines for Unmanned Aircraft Systems</article-title>
          .
          <source>Ergonomics in Design: The Quarterly of Human Factors Applications</source>
          <volume>24</volume>
          (04
          <year>2016</year>
          ). DOI: http://dx.doi.org/10.1177/1064804616640632
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Anand S.</given-names>
            <surname>Rao</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michael P.</given-names>
            <surname>Georgeff</surname>
          </string-name>
          .
          <year>1995</year>
          .
          <article-title>BDI Agents: From Theory to Practice</article-title>
          .
          <source>In Proceedings of the First International Conference on Multiagent Systems, June 12-14</source>
          ,
          <year>1995</year>
          , San Francisco, California, USA.
          <fpage>312</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Tezza</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Andujar</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>The State-of-the-Art of Human-Drone Interaction: A Survey</article-title>
          .
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2019</year>
          ),
          <fpage>167438</fpage>
          -
          <lpage>167454</lpage>
          . DOI: http://dx.doi.org/10.1109/ACCESS.2019.2953900
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>