<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>A Proposed Risk Categorisation Model for Human-Machine Teaming</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zena Assaad</string-name>
          <email>zena.assaad@anu.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The Australian National University</institution>
          ,
          <addr-line>Canberra</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Autonomous systems are becoming more prevalent across a diversity of industries and applications. The development and deployment of these systems is surpassing the promulgation of standards and regulations needed to govern their safety. As the potential applications of autonomous systems continue to broaden, segregating these systems from humans will become increasingly difficult and potentially not feasible in some contexts, such as human-machine teaming (HMT). A mechanism for categorising risk for HMT operations against levels of autonomy (LOA) and machine functions is proposed. The risk categorisation tool sits within a broader safety framework for HMT. The user-centric framework will enable the safe operation of humans alongside machines in a teaming environment in which the machine will not be physically segregated from the human. A key factor to effective safety assurance is proportionality. Autonomous capabilities can vary widely for HMT operations, resulting in varying levels of risk. The proposed risk categorisation tool provides a mechanism for categorising risk for HMT operations.</p>
      </abstract>
      <kwd-group>
<kwd>Human-machine teaming</kwd>
        <kwd>autonomy</kwd>
        <kwd>safety framework</kwd>
        <kwd>assurance</kwd>
        <kwd>risk</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>1. Introduction</title>
      <p>
        The origin of the word autonomy stems from the Greek words “auto”, meaning self, and “nomos”, meaning governance [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], reflecting a notion of independence and personal authority [21]. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] argues that the term autonomy is often conveyed through two interpretations: one denoting self-sufficiency, an ability to take care of oneself, and the other denoting self-directedness, freedom from outside control. The differences between these interpretations have elicited multiple definitions attempting to conceptualise autonomy. These efforts have been accompanied by attempts to define levels of autonomy (LOA) as a mechanism for categorising the varying capabilities of autonomous systems. [18] provides an in-depth literature review of the evolution of LOA over the last few decades.
      </p>
      <p>While many LOA taxonomies have been proposed over the years, none is specific to the
application of human-machine teaming (HMT) [18]. [14] presents a framework for adaptive
automation processes for human-robot teaming. While that framework presents varying LOA as a
method for enhancing human-system performance, it does not present a taxonomy for categorising
LOA for HMT.</p>
      <p>There is not yet a globally agreed definition of HMT; however, the broader literature
defines HMT, often termed human-autonomy teaming, around the notion of sharing authority to
pursue common goals [12]. In the context of this research, HMT is defined as a combination of
human and machine capabilities working together towards an aligned goal [20].</p>
      <p>HMT operations have been actualised across a breadth of domains and applications, demonstrating
a range of machine capabilities - what the machine is capable of doing - and machine functions - the
role or purpose of the machine. LOA are an indication of machine capabilities as they describe the
degree to which a system is automated and what level of human intervention is required [17]. How
risk is measured and how safety is assured for HMT operations will differ depending on the varying
capabilities that span the spectrum that we call autonomy.</p>
      <p>© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)</p>
      <p>
        Currently, robust mechanisms for assuring the safety of autonomous systems are lacking across
most industries. There exists a patchwork of safety standards around robot systems, most prominently
in the industrial sector. ISO 15066 [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] specifies safety requirements for collaborative industrial robot
systems, as described in ISO 10218-1 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and ISO 10218-2 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], that share the same workspace as
humans. ISO 15066 focuses heavily on controlling process parameters, such as speed and force,
to mitigate potential collisions. Mitigating collisions by controlling such parameters is a common
mechanism within the literature on the safety assurance of humans operating alongside
collaborative robots [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][13].
      </p>
      <p>
        While standards such as ISO 15066 “Robots and robotic devices — Collaborative robots” do exist
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], these standards require systems to be physically separated from humans while operating. Given
the diversity of potential applications of HMT, segregating machines from humans may not always be
feasible. While established and standardised safety frameworks exist across many industries,
managing the risks associated with autonomous systems introduces unique challenges. The breadth of
possible applications of autonomous technologies also introduces challenges for risk management, as
the diversity of use cases, each with different LOA and inherent risks, can be difficult to
capture. Understanding levels of risk for different LOA will aid in determining the proportionate safety
measures required for HMT operations.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Risk assessment and management</title>
      <p>
        Risk assessment and management is a core pillar of safety assurance for systems, autonomous or
otherwise. Established as a scientific field in the 1970s, the practice of risk assessment and
management has matured significantly in the decades since and is now used across most
industries [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] explore the links between facts and values in risk decision making, demonstrating that risk is
often connected with other issues that impact decision making: “decision making on traffic safety has
to be integrated with decision making on traffic planning as a whole, including issues such as travel
time, accessibility, environmental impact, costs, etc.” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. When considering risk assessment and
management for HMT, the purpose of a system, what it is actually capable of in terms of autonomy,
what capacity there is for human intervention, and what the human role is within the broader team are
fundamental points that need to be considered if risk is to be assessed and managed
proportionately.
      </p>
      <p>
        Subjective probability is a common approach to managing uncertainty in risk assessments [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Note, the reference to uncertainty here is at the operational level rather than at a systems level.
Uncertainty at the operational level can result from many factors, a common one being incomplete
information [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. How we understand and conceptualise autonomy within the context of HMT will
influence how we analyse risk. Categorising risk levels for HMT operations against LOA and machine
functions will facilitate a proportionate approach to risk assessment and management. The risk
categorisation matrix presented within this paper sits within a broader HMT safety framework,
which is detailed in Section 4, and provides a tool for identifying appropriate levels of risk for HMT
operations.
      </p>
    </sec>
    <sec id="sec-3">
<title>3. Risk categorisation model</title>
      <p>
        A method for categorising HMT applications against machine capabilities, expressed through
LOA, and machine functions is proposed. The literature around LOA has proposed taxonomies
specifying the degree to which a task is automated. While several taxonomies have been proposed [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
[15][16][19], this research builds on the work of [17], which proposes ten LOA. While the proposed
levels were designed to be applicable to a “wide variety of domains and task types” [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], not all of the
levels would be applicable to HMT. For the purpose of this research, the following four LOA were
identified as being applicable to the context of HMT.
      </p>
      <sec id="sec-3-1">
        <title>Both the human and the computer generate possible decision options. The human still retains full control over the selection of which option to implement; however, carrying out the actions is shared between the human and the system.</title>
        <p>At this level, the computer generates a list of decision options that it selects
from and carries out if the human consents. The human may approve of the
computer’s selected option or select one from among those generated by the
computer or the operator. The computer will then carry out the selected
action. This level represents a higher-level decision support system that is
capable of selecting among alternatives as well as implementing the selected
option.</p>
      </sec>
      <sec id="sec-3-2">
        <title>At this level, the system selects the best option to implement and carries out</title>
        <p>that action, based upon a list of alternatives it generates (augmented by
alternatives suggested by the human operator). This system, therefore,
automates decision making in addition to the generation of options (as with
decision support systems).</p>
      </sec>
      <sec id="sec-3-3">
        <title>At this level, the system carries out all actions. The human is completely out of the control loop and cannot intervene. This level is representative of a fully automated system where human processing is not deemed to be necessary.</title>
        <p>
          The four LOA detailed in Table 1 were chosen as they reflect a more balanced relationship
between human and machine. Each of the levels demonstrates less of a hierarchical structure and
more of a collaborative relationship with opportunities for negotiation between the entities. HMT is
characterised by a more balanced relationship between human and machine with greater levels of
negotiation [12]. This type of relationship requires increased machine capability, which is why the
lower LOA identified in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] were deemed not applicable to the given context.
        </p>
        <p>
          As machine capabilities cannot be isolated from machine functions, four machine functions
have also been identified. Building on [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ][15], the proposed LOA are considered
applicable to four machine functions that attempt to identify the role of the machine in a given
context. The four machine functions, and what they encompass within the context of this framework,
are detailed below.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>Involves sensing and registration of input data.</title>
      </sec>
      <sec id="sec-3-5">
        <title>Involves cognitive functions, such as processing information or input data.</title>
      </sec>
      <sec id="sec-3-6">
        <title>Involves decision and ac3on selec3on.</title>
        <p>Implemen1ng</p>
      </sec>
      <sec id="sec-3-7">
        <title>Involves ac3on implementa3on.</title>
        <p>The four machine functions identified represent the possible functions or purposes of a machine
within HMT. The functions range from monitoring, which involves lower levels of decision making
on the part of the machine, through to implementing, which involves carrying out decisions with
or without human intervention.</p>
        <p>To situate HMT operations in the context of machine capability, expressed through LOA, and
machine functions, a categorisation matrix has been developed, and is depicted in Figure 1 below. The
matrix is a tool for categorising HMT operations against three risk categories to support proportionate
risk assessment and management of HMT operations.</p>
        <p>The matrix presented in Figure 1 illustrates three risk categories for HMT operations. Risk
category 1 encompasses capabilities that demonstrate lower levels of autonomy and greater levels of
human supervision. Risk category 2 encompasses capabilities that demonstrate greater levels of
autonomy and require less human supervision. Risk category 3 encompasses capabilities that
demonstrate high levels of autonomy and involve minimal human supervision. Situating HMT
operations within these risk categories will ensure proportionate and effective safety assurance can be
demonstrated through the broader HMT safety framework.</p>
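        <p>As a minimal sketch only, the matrix lookup described above can be expressed in code. The four LOA and four machine functions are taken from this paper, but the cell-to-category assignments below are illustrative assumptions, not the actual mapping defined in Figure 1:</p>
        <preformat>
```python
# Hypothetical sketch of the risk categorisation matrix lookup.
# The LOA and machine function names paraphrase the paper; the
# additive scoring and category thresholds are assumptions made
# purely for illustration.

LEVELS_OF_AUTONOMY = [
    "shared control",             # human selects; execution shared
    "blended decision making",    # computer selects; human consents
    "automated decision making",  # system selects and acts
    "full automation",            # human out of the control loop
]

MACHINE_FUNCTIONS = ["monitoring", "processing", "deciding", "implementing"]


def risk_category(loa: str, function: str) -> int:
    """Return risk category 1 (most supervised) to 3 (least supervised)."""
    # Assumption: risk grows with both the level of autonomy and the
    # scope of the machine function, so a combined index is used.
    score = LEVELS_OF_AUTONOMY.index(loa) + MACHINE_FUNCTIONS.index(function)
    if score in (0, 1):
        return 1  # lower autonomy, greater human supervision
    if score in (2, 3):
        return 2  # greater autonomy, less human supervision
    return 3      # high autonomy, minimal human supervision


print(risk_category("shared control", "monitoring"))     # -> 1
print(risk_category("full automation", "implementing"))  # -> 3
```
        </preformat>
        <p>A lookup of this shape makes the proportionality argument concrete: the same machine function lands in a different risk category as the LOA changes.</p>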
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. HMT safety framework</title>
      <p>The presented risk categorisation matrix sits within a broader HMT safety framework as a
mechanism for identifying appropriate levels of risk. The proposed broader framework will
demonstrate the safety assurance of both entities - human and machine - within HMT. In a teaming
context, the human role is less authoritative and more collaborative, as is demonstrated through
increased opportunities for negotiation between the two entities [12].</p>
      <p>Capturing all the broader risks that come with HMT can be challenging. As such, guiding
principles have been developed to help users identify the risks of HMT. The guiding
principles include:</p>
      <list list-type="bullet">
        <list-item>
          <p>Adaptability - understanding the capacity to which the human and the machine can adapt to
their environment.</p>
        </list-item>
        <list-item>
          <p>Goal setting and goal actualisation - as HMT is defined by the pursuit of a shared goal, it is
necessary to understand how goals are determined and actualised for both humans and machines.</p>
        </list-item>
        <list-item>
          <p>Communication - understanding how, what, why and when information is communicated
between human and machine.</p>
        </list-item>
        <list-item>
          <p>Ethics - understanding the ethical implications of humans operating in close proximity to a
machine within specific environments.</p>
        </list-item>
        <list-item>
          <p>Trust - understanding how trust between the two entities influences decision making.</p>
        </list-item>
      </list>
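      <p>Purely as an illustration of how the five guiding principles might be operationalised by users of the framework, the principles could be recorded as a simple assessment checklist. The structure and field names below are hypothetical, not part of the framework itself:</p>
      <preformat>
```python
# Hypothetical sketch: recording an assessment against the five guiding
# principles of the HMT safety framework. The principle names come from
# the paper; the checklist structure and fields are illustrative only.
from dataclasses import dataclass

PRINCIPLES = [
    "adaptability",
    "goal setting and goal actualisation",
    "communication",
    "ethics",
    "trust",
]


@dataclass
class PrincipleAssessment:
    principle: str
    considered: bool = False  # has this principle been assessed yet?
    notes: str = ""           # free-text record of the assessment


def new_checklist():
    """Create an empty checklist covering all five guiding principles."""
    return [PrincipleAssessment(p) for p in PRINCIPLES]


checklist = new_checklist()
checklist[0].considered = True
checklist[0].notes = "Machine adapts to environment; human workload assessed."
print(sum(item.considered for item in checklist))  # -> 1
```
      </preformat>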
      <p>The HMT safety framework will provide assurance of both entities, and in addition to addressing
physical safety, the framework will also include psychosocial considerations such as trust. The
framework will address how a system or capability operates in a specific environment and, more
importantly, how humans operate alongside these capabilities. The HMT safety framework will be
targeted at the implementation stage, with specific focus on user experience. It will act as a guiding
set of processes for users to follow to ensure the safe operation of humans alongside machines in a
teaming environment.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and next steps</title>
      <p>Machine capabilities exist across a spectrum of autonomy. LOA applicable to HMT were identified
alongside machine functions. These factors are used to categorise HMT operations against three
levels of risk. Different machine capabilities and functions will yield different risks. The risks that
come with lower capabilities and functions, and thereby lower levels of uncertainty, will differ from the
risks that emerge from higher machine capabilities and functions that entail greater levels of
uncertainty. It follows that different risk analyses need to be applied to ensure proportionate measures
of safety are being implemented.</p>
      <p>The next stages of this research will include further development of the three risk analysis
categories. Each category will be developed against case study analyses across multiple industries to
ensure the outputs are applicable across a diversity of industries. The final output will be a
cross-sector safety management framework for HMT.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgements</title>
      <p>The research for this paper received funding from the Australian Government through Trusted
Autonomous Systems, a Defence Cooperative Research Centre funded through the Next Generation
Technologies Fund.</p>
    </sec>
    <sec id="sec-7">
      <title>7. References</title>
      <p>12. Lyons, J. B., Sycara, K., Lewis, M., &amp; Capiola, A. (2021). Human–Autonomy Teaming: Definitions, Debates, and Directions. Frontiers in Psychology, 12, 19–32. https://doi.org/10.3389/fpsyg.2021.589585</p>
      <p>13. Matthews, M., Chowdhary, G., &amp; Kieson, E. (2017). Intent Communication between Autonomous Vehicles and Pedestrians. https://arxiv.org/abs/1708.07123v1</p>
      <p>14. Parasuraman, R., Barnes, M., &amp; Cosenzo, K. (2007). Adaptive Automation for Human-Robot Teaming in Future Command and Control Systems. The International C2 Journal, 1(2), 43–68.</p>
      <p>15. Parasuraman, R., Sheridan, T. B., &amp; Wickens, C. D. (2000). A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 30(3), 286–297.</p>
      <p>16. Parker, J. (2021). The Challenges Posed by the Advent of Maritime Autonomous Surface Ships for International Maritime Law. Australian and New Zealand Maritime Law Journal, 35(1), 31–42.</p>
      <p>17. Sheridan, T. B., &amp; Verplank, W. L. (1978). Human and Computer Control of Undersea Teleoperators (Mechanical Engineering, Massachusetts Institute of Technology) [Technical Report]. https://apps.dtic.mil/sti/pdfs/ADA057655.pdf</p>
      <p>18. Vagia, M., Transeth, A. A., &amp; Fjerdingen, S. A. (2016). A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed? Applied Ergonomics, 53, 190–202. http://dx.doi.org/10.1016/j.apergo.2015.09.013</p>
      <p>19. Vine, R., &amp; Kohn, E. (2020). Concept for Robotic and Autonomous Systems V1.0. Joint Warfare Council. https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf</p>
      <p>20. Walliser, J. C., de Visser, E. J., &amp; Shaw, T. H. (2019). Team Structure and Team Building Improve Human-Machine Teaming With Autonomous Agents. Journal of Cognitive Engineering and Decision Making, 13(4), 258–278. https://doi.org/10.1177/1555343419867563</p>
      <p>21. Weinstein, N., Przybylski, A. K., &amp; Ryan, R. M. (2012). The index of autonomous functioning: Development of a scale of human autonomy. Journal of Research in Personality, 46, 397–413. http://dx.doi.org/10.1016/j.jrp.2012.03.007</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aven</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Risk assessment and risk management: Review of recent advances on their foundation</article-title>
          .
          <source>European Journal of Operational Research</source>
          ,
          <volume>253</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . https://doi.org/10.1016/j.ejor.2015.12.023
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bradshaw</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feltovich</surname>
            ,
            <given-names>P. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kulkarni</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taysom</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Uszok</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Dimensions of Adjustable Autonomy and Mixed-Initiative Interaction</article-title>
          .
          <source>In Agents and Computational Autonomy: Potential, Risks, and Solutions</source>
          (Vol.
          <volume>2969</volume>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>39</lpage>
          ). Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bradshaw</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>R. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Woods</surname>
            ,
            <given-names>D. D.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <source>The Seven Deadly Myths of “Autonomous Systems.” IEEE Computer Society</source>
          ,
          <fpage>1541</fpage>
          -
          <lpage>1672</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Clothier</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>Brendan. P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Perez</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2019</year>
          , February).
          <article-title>Autonomy from a Safety Certification Perspective</article-title>
          . 18th Australian International Aerospace Congress, Melbourne, Australia. https://www.researchgate.net/profile/Reece-Clothier/publication/331587067_Autonomy_from_a_Safety_Certification_Perspective/links/5c81c6ce458515831f8f3571/Autonomy-from-a-Safety-Certification-Perspective.pdf
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dubois</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Representation, Propagation, and Decision Issues in RiskAnalysis Under Incomplete Probabilistic Information</article-title>
          .
          <source>Risk Analysis</source>
          ,
          <volume>30</volume>
          (
          <issue>3</issue>
          ),
          <fpage>361</fpage>
          -
          <lpage>368</lpage>
          . https://doi.org/10.1111/j.1539-6924.2010.01359.x
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Endsley</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kaber</surname>
            ,
            <given-names>D. B.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Level of automation effects on performance, situation awareness and workload in a dynamic control task</article-title>
          .
          <source>Ergonomics</source>
          ,
          <volume>42</volume>
          (
          <issue>3</issue>
          ),
          <fpage>462</fpage>
          -
          <lpage>492</lpage>
          . https://doi.org/ 10.1080/001401399185595
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Falconi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sabattini</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Secchi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fantuzzi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Melchiorri</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Edge-weighted consensus-based formation control strategy with collision avoidance</article-title>
          .
          <source>Robotica</source>
          ,
          <volume>33</volume>
          (
          <issue>2</issue>
          ),
          <fpage>332</fpage>
          -
          <lpage>347</lpage>
          . https://doi.org/10.1017/S0263574714000368
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hansson</surname>
            ,
            <given-names>S. O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Aven</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Is risk analysis scientific? Risk Analysis</article-title>
          ,
          <volume>34</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1173</fpage>
          -
          <lpage>1183</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>ISO</surname>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Robots and robotic devices-Safety requirements for industrial robots-Part 1: Robots</article-title>
          . International Organisation for Standards. https://www.iso.org/standard/51330.html
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>ISO</surname>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Robots and robotic devices-Safety requirements for industrial robots-Part 2: Robot systems and integration</article-title>
          .
          <source>International Organisation for Standards</source>
          . https://www.iso.org/ standard/41571.html
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>ISO</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Robots and robotic devices-Collaborative robots</article-title>
          .
          <source>International Organisation for Standards</source>
          . https://www.iso.org/standard/62996.html
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>