<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Trustworthy “blackbox” Self-Adaptive Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Beatriz Cabrero-Daniel</string-name>
          <email>beatriz.cabrero-daniel@gu.se</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yasamin Fazelidehkordi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olga Ratushniak</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Gothenburg</institution>
          ,
          <addr-line>Hörselgången 5, 417 56, Göteborg</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>17</volume>
      <issue>2023</issue>
      <abstract>
        <p>For humans to trust Self-Adaptive Systems in critical situations, these systems must be robust, ethical, and lawful, yet human intelligence is still needed to make ethical decisions. This paper presents a framework to discuss human values in the Requirements Engineering (RE) process for Self-Adaptive Systems, as well as RE-specific challenges arising from the AI paradigm shift towards foundation models: self-supervised blackboxes. Semi-autonomous heavy mining vehicles serve as a running example to present the requirements.</p>
      </abstract>
      <kwd-group>
        <kwd>Trustworthy AI</kwd>
        <kwd>Human Oversight</kwd>
        <kwd>Autonomous Vehicles</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There is much public discussion on how Artificial Intelligence (AI) differs from human
intelligence. We trust the latter; we are wary of the former. Industry practitioners share these concerns
and put effort into measuring safety, privacy, and related qualities. Their goal is to ensure AI-based Self-Adaptive
Systems (SAS) can at least reach human performance in the tasks they were designed for [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
However, these efforts are often insufficient for humans to trust SASs, especially as
foundation models, such as OpenAI’s ChatGPT, rapidly permeate society.
      </p>
      <p>
        Foundation models are based on large-scale self-supervised deep learning algorithms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
whose inner workings are not transparent, making them difficult to explain to and interpret by
users. Moreover, foundation models often use large amounts of unlabelled data, often gathered
with little regard for ethical concerns, e.g., diversity. The more complex and accurate the models become,
the more data is needed to train them, and the harder it is to explain their decision-making
process. Hence the conflict between these powerful AI “blackboxes” and user trust [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Requirements Engineering (RE) guidelines for ethical AI were reviewed with the aim of
building a framework for Trustworthy SASs (T-SASs). The outlined T-SAS framework is
motivated by the emergence of semi-autonomous heavy vehicles for mining, used as a running example,
which raise the concerns addressed here. Nevertheless, the T-SAS framework could also address human
values in other fields. The focus will be on human oversight, which is still needed to promote trust in
SASs [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ]. The insights on human-on-the-loop (HOTL) expectations for T-SAS monitoring
and human intervention aim to foster discussion among RE practitioners about creating
T-SASs that adhere to ethical principles and laws [
        <xref ref-type="bibr" rid="ref4 ref7">4, 7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Mining Context</title>
      <p>
        Aristotle defined credibility in terms of wisdom, virtue, and goodwill. Centuries later, EU
guidelines state that AI should be trustworthy, that is, robust, lawful, and ethical [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Fig. 1
shows requirements related to human autonomy and shared responsibility in the EU guidelines.
Evaluating whether adaptive systems meet stakeholders’ needs often focuses on robustness
verification, but this may not capture ethical values [
        <xref ref-type="bibr" rid="ref1 ref8 ref9">1, 8, 9</xref>
        ]. Nevertheless, embedding ethical
values in SASs is challenging, partly due to recent AI developments such as foundation
models, e.g., text-to-image generators for non-expert users [
        <xref ref-type="bibr" rid="ref10 ref2">10, 2</xref>
        ].
      </p>
      <p>
        Designing comprehensive evaluation strategies for these complex industrial systems
is difficult due to the lack of auditability and sustainability analysis, and the emergence of
unforeseen skills during training [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Moreover, the lack of open APIs and benchmarks hinders
research on foundation models’ transparency, robustness, fairness, and similar qualities. In addition, the resources
needed to train and test such systems hinder academics’ access to evaluating their benefits
and harms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Nevertheless, high-risk SASs like Autonomous Vehicles (AVs), potentially using
foundation models, must show transparency to allow for human oversight and
intervention [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. SASs must inform diverse stakeholders, e.g., end-users or third-party auditors,
about their capacities and limitations and trace them back to input data to enable responsibility
reasoning [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Responsibility sharing and mitigation of foreseeable misuse are challenging
and raise ethical questions that need to be answered during the RE process [
        <xref ref-type="bibr" rid="ref13 ref3">3, 13</xref>
        ].
      </p>
      <p>
        Mining AVs in safety-critical situations are high-risk AI products; therefore, a HOTL is needed to
monitor the AVs and intervene when prompted [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Human drivers and AVs
primarily rely on vision, or Computer Vision (CV), to avoid danger, and their responsibilities
must be balanced [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]. AI algorithms can help remote operators of mining vehicles in critical
situations: by measuring user attention, whether driver or remote operator, to reduce reaction
times, or by facilitating fallback to human control in case of low AI confidence [
        <xref ref-type="bibr" rid="ref7">16, 7</xref>
        ]. Even
HOTL AVs can be involved in incidents, potentially fatal ones with heavy mining machinery, so risks
arising from faulty interactions must be mitigated. Human-AI interaction is receiving increasing
academic attention, together with the limitations of AVs, including their benefits, harms, and development
practices [
        <xref ref-type="bibr" rid="ref11">11, 17, 18</xref>
        ]. The AI paradigm is shifting towards blackbox models, hindering HOTL-SAS
interaction and raising the question of how to split the responsibility for decision-making.
      </p>
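      <p>The fallback behaviour described above can be illustrated with a minimal sketch; the threshold values, field names, and actions below are illustrative assumptions standing in for requirements that would be elicited during RE, not values from the cited guidelines:</p>

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from the RE process
# and a safety analysis, not from this sketch.
CONFIDENCE_FLOOR = 0.85      # below this, request human takeover
ATTENTION_FLOOR = 0.60       # below this, the operator is considered inattentive

@dataclass
class PerceptionOutput:
    label: str
    confidence: float        # self-reported model confidence in [0, 1]

def next_action(perception: PerceptionOutput, operator_attention: float) -> str:
    """Decide whether the AV keeps control or falls back to the HOTL."""
    if perception.confidence >= CONFIDENCE_FLOOR:
        return "autonomous"              # AI confidence is sufficient
    if operator_attention >= ATTENTION_FLOOR:
        return "handover_to_human"       # low AI confidence, attentive operator
    # Low confidence AND an inattentive operator: degrade safely instead
    return "safe_stop"
```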
      <p>
        Deep Learning algorithms are increasingly popular for detecting edge cases where human
intervention might be needed, but they rely on large amounts of annotated data, which are difficult or
impossible to gather and expensive and time-consuming to curate [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Moreover, sensor difficulties,
e.g., extreme weather affecting visibility, or cognitive limits, e.g., insufficient training data,
might cause malfunctions [19]. The RE process therefore needs to set standards for data quality,
security, and privacy [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Based on the data, robustness needs to be periodically evaluated by
stakeholders, using performance metrics and criteria that reflect their values and goals, e.g., ore
throughput rate [
        <xref ref-type="bibr" rid="ref1 ref10">1, 10</xref>
        ]. Limitations of mining AVs should be clearly explained to the HOTL
at all times, e.g., to prevent incidents, improve throughput rates, or audit accidents [
        <xref ref-type="bibr" rid="ref12 ref7">12, 7</xref>
        ].
Transparency, though, is not always achievable, especially with opaque
blackbox algorithms or foundation models.
      </p>
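      <p>The periodic, stakeholder-driven robustness evaluation discussed above could be automated along the following lines; the metric names and thresholds are hypothetical stand-ins for the values and goals stakeholders would agree on during RE:</p>

```python
# Illustrative only: metric names, thresholds, and the "_max" convention
# are assumptions, not part of any cited guideline.
STAKEHOLDER_CRITERIA = {
    "detection_recall": 0.95,       # safety: missed hazards are critical
    "ore_throughput_rate": 0.80,    # business goal from the running example
    "false_alert_rate_max": 0.10,   # operator trust: avoid alarm fatigue
}

def evaluate_robustness(measured: dict) -> list:
    """Return the criteria the SAS currently violates, for stakeholder review."""
    violations = []
    for metric, threshold in STAKEHOLDER_CRITERIA.items():
        value = measured.get(metric)
        if value is None:
            violations.append((metric, "not measured"))
        elif metric.endswith("_max"):
            if value > threshold:                 # upper-bounded metric exceeded
                violations.append((metric, value))
        elif not value >= threshold:              # lower-bounded metric missed
            violations.append((metric, value))
    return violations
```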
    </sec>
    <sec id="sec-3">
      <title>3. Framework for Trustworthy Self-Adaptive Systems</title>
      <p>This section outlines a framework to guide the RE process for T-SAS focusing on requirements
for HOTL-mechanisms (see Figure 1) in light of the trend to incorporate foundation models
such as GPT-3, DALL-E, or BERT [20]. The relationship between the concepts is also discussed:</p>
      <p>
        Robustness. Classic AIs use annotated data, whilst foundation models use large volumes
of unlabelled data, removing the difficult and time-consuming task of curating data sets. This
paradigm can particularly benefit AVs for mining, which inherently need to deal with
previously unseen scenarios. Nevertheless, foundation models, especially those learning online, can be
affected by incorrect, redundant, or unstable data, which could lead to safety-critical situations.
Therefore, the T-SAS framework promotes the usage of high-quality, diverse, self-updating,
and self-augmenting data sets [
        <xref ref-type="bibr" rid="ref4">21, 4</xref>
        ]. Appropriate requirements for data availability, usability,
consistency, and integrity must be discussed [
        <xref ref-type="bibr" rid="ref1 ref2">2, 1</xref>
        ].
      </p>
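      <p>The data requirements listed above lend themselves to automated checks; the following sketch, with assumed field names and rules, illustrates availability and integrity validation for a single data sample:</p>

```python
import hashlib

# Sketch of automatable data-set checks for the requirements discussed above;
# the expected keys and the checksum rule are illustrative assumptions.
def check_sample(sample: dict, expected_keys: tuple = ("image_id", "sensor", "payload")) -> list:
    issues = []
    # Availability/usability: every expected field must be present and non-empty
    for key in expected_keys:
        if not sample.get(key):
            issues.append(f"missing:{key}")
    # Integrity: the payload must match its recorded checksum, if one is present
    payload, digest = sample.get("payload"), sample.get("sha256")
    if payload is not None and digest is not None:
        if hashlib.sha256(payload).hexdigest() != digest:
            issues.append("checksum_mismatch")
    return issues
```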
      <p>
        Human oversight. Whilst foundation models can accomplish complex tasks, e.g., image
synthesis, they still show limitations, e.g., in generalizing to new scenes, mainly due to
self-supervised training [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Even if totally reliable, SASs incorporating such models would still
need to be transparent to facilitate human oversight, foster human autonomy, and, ultimately,
be trustworthy. HOTL-SAS interaction is an open and important problem for humans, who
should be able to supervise and override SAS decisions at all times. Therefore, T-SASs must
integrate HOTL strategies and monitoring interfaces suited to the end-users, designed to
address the transparency and accountability needs of T-SASs [
        <xref ref-type="bibr" rid="ref10 ref7">17, 7, 10</xref>
        ].
      </p>
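      <p>The principle that the HOTL should be able to supervise and override SAS decisions at all times can be sketched as follows; the interface is a deliberately simplified assumption, not a proposed design:</p>

```python
# Minimal sketch of the override principle: the human command, when present,
# always takes precedence over the SAS proposal, and every decision is logged.
class HOTLController:
    def __init__(self):
        self.human_command = None      # set by the monitoring interface
        self.log = []                  # decision trail kept for later audit

    def override(self, command: str):
        """Record a human takeover command from the monitoring interface."""
        self.human_command = command

    def decide(self, sas_proposal: str) -> str:
        """Execute the human command if one exists, else the SAS proposal."""
        chosen = self.human_command if self.human_command is not None else sas_proposal
        self.log.append({"proposed": sas_proposal, "executed": chosen,
                         "overridden": chosen != sas_proposal})
        return chosen
```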
      <p>
        Transparency. T-SASs should provide concise, complete, correct, and clear explanations that
are relevant, accessible and comprehensible to users in a context (use or foreseeable misuse), to
avoid risks to health, safety, or fundamental rights [
        <xref ref-type="bibr" rid="ref3 ref4">4, 3</xref>
        ]. These requirements are intended to ensure
human autonomy and responsibility sharing, but integrating these needs into SASs is challenging.
Previous work has focused on highly trained operators, e.g., aircraft pilots, but there is still a
need to investigate how to design interactions with non-expert users [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Training end-users
while using SASs could be considered. For that, appropriate metrics and criteria, adapted to the
user and the operation context, would be needed to ensure clarity and avoid ambiguity about
the state of the T-SAS.
      </p>
      <p>
        Accountability. As discussed above, many SASs, including AVs, cannot ensure safety on their
own and need to be monitored by humans during operation. Even when SASs are not entirely
robust, they might be able to produce priors and convey information that greatly helps the HOTL in
critical situations. This has long been a focus of Human-Computer Interaction research [
        <xref ref-type="bibr" rid="ref3 ref7">3, 17, 7</xref>
        ].
Moreover, T-SASs must also be accountable, justifying their goals, motivations, and rationale
in post hoc analyses by third parties. This topic is strongly related to the detection, leveraging,
and mitigation of risks by public authorities. Therefore, the framework should explicitly connect
these needs to open communication requirements, which are critical for T-SASs that closely interact with
humans, e.g., AV drivers [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
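      <p>A post hoc accountability analysis presupposes decision records that trace outputs back to input data and the deployed model; a minimal sketch of such a record, with assumed field names, could look as follows:</p>

```python
import json, time

# Illustrative audit record linking a decision to its inputs and rationale so
# that third parties can reconstruct "why" after the fact; field names are assumptions.
def audit_record(decision: str, inputs: dict, rationale: str, model_version: str) -> str:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,    # trace back to the deployed model
        "inputs": inputs,                  # trace back to input data
        "decision": decision,
        "rationale": rationale,            # human-readable justification
    }
    # Serialized with stable key order so records can be diffed and archived
    return json.dumps(record, sort_keys=True)
```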
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>
        Humans often mistrust SASs or show automation bias [
        <xref ref-type="bibr" rid="ref11 ref3">3, 11</xref>
        ]. Both are concerning as SASs
increasingly integrate foundation models, which are far from transparent or auditable [
        <xref ref-type="bibr" rid="ref2 ref6">20, 6, 22, 2</xref>
        ].
Much effort has been devoted to supporting practitioners in addressing human values in the RE
process, but the absence of clear guidelines, benchmarks, metrics, and evaluation criteria makes
this task challenging. As a result, there is still a need for human oversight, e.g., fallback
procedures [
        <xref ref-type="bibr" rid="ref11">11, 17, 16</xref>
        ]. Academics from different backgrounds should examine the models’ biases
and limitations, and inform society about their trustworthiness [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These recommendations
are based on existing international laws, domestic legislation, and AI development frameworks
and aim to increase awareness among RE practitioners and inspire the development of a generic
framework for creating T-SASs.
      </p>
      <p>Efforts to homogenise mining processes are already being made, but further research is
needed to adequately address human values in HOTL mining SASs. For instance, it is necessary
to consider the implications that foundation models will entail with respect to other ethical
considerations. Agreeing on appropriate recommendations with practitioners to address human
values in the RE process for T-SAS would be a necessary next step. Frameworks from other
disciplines and the ad-hoc practices of RE practitioners could be studied to propose adaptations
to existing frameworks to better address human values in T-SAS development. Data governance
should in turn be aligned with stakeholders’ values, e.g., non-discrimination, and requirements
such as privacy or fairness. These considerations are left for future work.</p>
      <p>This work is based on European Union guidelines, but different values might prevail in non-EU
countries. Even within the EU, revisions to the AI legislation, which is still in draft form, might
have a significant impact on the SASs now in development. As such, it is important for the
framework to adapt to new, unforeseeable trust elements introduced by public authorities that
might, directly and indirectly, impact the expectations for T-SASs. As a final note, future research
must also address the question of how to allow for diverse legislation and context-dependent
interpretation of T-SAS requirements.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work is thanks to the University of Gothenburg’s Amanuens program. Thanks to Prof.
Berger and Assoc. Prof. Horkoff for their valuable guidance. This work was supported by the
Vinnova project ASPECT [2021-04347].</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Berry</surname>
          </string-name>
          ,
          <article-title>Requirements engineering for artificial intelligence: What is a requirements specification for an artificial intelligence?</article-title>
          , volume
          <volume>13216</volume>
          LNCS, Springer Science and Business Media Deutschland GmbH,
          <year>2022</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>25</lpage>
          . doi:10.1007/978-3-030-98464-9_2.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bommasani</surname>
          </string-name>
          , et al.,
          <source>On the opportunities and risks of foundation models</source>
          ,
          <year>2021</year>
          . URL: https://arxiv.org/abs/2108.07258. doi:10.48550/ARXIV.2108.07258.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          EUR-Lex - 52021PC0206 - EN - EUR-Lex,
          <year>2021</year>
          . URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>European</given-names>
            <surname>Commission</surname>
          </string-name>
          and
          <article-title>Directorate-General for Communications Networks, Content and Technology, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment</article-title>
          ,
          <source>Publications Office</source>
          ,
          <year>2020</year>
          . doi:10.2759/002360.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Calinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weyns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gerasimou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Iftikhar</surname>
          </string-name>
          , I. Habli, T. Kelly,
          <article-title>Entrust: engineering trustworthy self-adaptive software with dynamic assurance cases</article-title>
          ,
          <year>2018</year>
          , pp.
          <fpage>495</fpage>
          -
          <lpage>495</lpage>
          . doi:10.1145/3180155.3182540.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kokaly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chechik</surname>
          </string-name>
          ,
          <article-title>Toward requirements specification for machine-learned components</article-title>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>241</fpage>
          -
          <lpage>244</lpage>
          . doi:10.1109/REW.2019.00049.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dimatteo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Berry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Czarnecki</surname>
          </string-name>
          ,
          <article-title>Requirements for monitoring inattention of the responsible human in an autonomous vehicle: The recall and precision tradeoff</article-title>
          (
          <year>2020</year>
          ). URL: https://ceur-ws.org/Vol-2584/RE4AI-paper2.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Halme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agbese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Antikainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-K.</given-names>
            <surname>Alanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jantunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          ,
          <article-title>Ethical user stories: Industrial study</article-title>
          (
          <year>2022</year>
          ). URL: http://ceur-ws.org.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Aydemir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dalpiaz</surname>
          </string-name>
          ,
          <article-title>A roadmap for ethics-aware software engineering</article-title>
          (
          <year>2018</year>
          ). URL: https://doi.org/10.1145/3194770.3194778. doi:10.1145/3194770.3194778.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>I. Ozkaya</surname>
          </string-name>
          ,
          <article-title>Ethics is a software design concern</article-title>
          ,
          <source>IEEE Software</source>
          <volume>36</volume>
          (
          <year>2019</year>
          )
          <fpage>4</fpage>
          -
          <lpage>8</lpage>
          . doi:10.1109/MS.2019.2902592.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.-Q. V.</given-names>
            <surname>Dao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Brandt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Battiste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-P. L.</given-names>
            <surname>Vu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Strybel</surname>
          </string-name>
          , W. W. Johnson,
          <article-title>The impact of automation assisted aircraft separation on situation awareness</article-title>
          , in: G. Salvendy,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Smith</surname>
          </string-name>
          (Eds.),
          <source>Human Interface and the Management of Information. Information and Interaction</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2009</year>
          , pp.
          <fpage>738</fpage>
          -
          <lpage>747</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N. E.</given-names>
            <surname>Gold</surname>
          </string-name>
          , Virginia Dignum:
          <article-title>Responsible artificial intelligence: How to develop and use ai in a responsible way</article-title>
          ,
          <source>Genetic Programming and Evolvable Machines 2020 22:1</source>
          <volume>22</volume>
          (
          <year>2020</year>
          )
          <fpage>137</fpage>
          -
          <lpage>139</lpage>
          . URL: https://link.springer.com/article/10.1007/s10710-020-09394-1. doi:10.1007/S10710-020-09394-1.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kortz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Budish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bavitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gershman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schieber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Waldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          , et al.,
          <article-title>Accountability of ai under the law: The role of explanation</article-title>
          ,
          <source>arXiv preprint arXiv:1711.01134</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Monocular human pose estimation: A survey of deep learning-based methods</article-title>
          ,
          <source>Computer Vision and Image Understanding</source>
          <volume>192</volume>
          (
          <year>2020</year>
          )
          <fpage>102897</fpage>
          . doi:10.1016/J.CVIU.2019.102897.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bulat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kossaifi</surname>
          </string-name>
          , G. Tzimiropoulos,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pantic</surname>
          </string-name>
          ,
          <article-title>Toward fast and accurate human pose estimation via soft-gated skip connections</article-title>
          ,
          <source>Proceedings - 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020</source>
          (
          <year>2020</year>
          )
          <fpage>8</fpage>
          -
          <lpage>15</lpage>
          . doi:10.1109/FG47880.2020.00014.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] I. Kotseruba, J. Tsotsos, Attention for vision-based assistive and automated driving: A review of algorithms and datasets, IEEE Transactions on Intelligent Transportation Systems (2022) 1–22. doi:10.1109/TITS.2022.3186613.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] C. Mutzenich, S. Durant, S. Helman, P. Dalton, Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles, Cognitive Research: Principles and Implications 6 (2021) 1–17. doi:10.1186/S41235-021-00271-8/FIGURES/6.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] N. Hutchins, Z. Kirkendoll, L. Hook, Social impacts of ethical artificial intelligence and autonomous system design, 2017 IEEE International Symposium on Systems Engineering, ISSE 2017 - Proceedings (2017). doi:10.1109/SYSENG.2017.8088298.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] M. Andriluka, L. Pishchulin, P. Gehler, B. Schiele, 2d human pose estimation: New benchmark and state of the art analysis, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2014) 3686–3693. doi:10.1109/CVPR.2014.471.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding (2018). URL: https://arxiv.org/abs/1810.04805. doi:10.48550/ARXIV.1810.04805.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] M. Borg, H.-M. Heyn, J. Horkoff, K. M. Habibullah, A. Knauss, E. Knauss, P. J. Li, Precog: Requirements Engineering toward Safe Machine Learning-Based Perception Systems for Autonomous Mobility | Vinnova, 2021. URL: https://www.vinnova.se/en/p/precog-requirements-engineering-toward-safe-machinelearning-based-perception-systems-for-autonomous-mobility/.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] F. Poursabzi-Sangdeh, D. G. Goldstein, J. M. Hoffman, Manipulating and measuring model interpretability, Conference on Human Factors in Computing Systems - Proceedings (2021). doi:10.1145/3411764.3445315.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>