<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Arenzano (Genoa), Italy, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Human-Centered Approach to Design and Evaluate Symbiotic AI Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Miriana Calvano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Curci</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rosa Lanzilotti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Piccinno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bari Aldo Moro</institution>
          ,
          <addr-line>Via Edoardo Orabona 4, 70125, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pisa</institution>
          ,
          <addr-line>Largo B. Pontecorvo 3, 56127, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>03</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Artificial Intelligence (AI) is spreading in many domains, revolutionizing the way individuals conceive their working and private lives; it enhances many tasks by automating decision-making and augmenting human capabilities. It is necessary to design high-quality AI systems that focus on the users' priorities and avoid potential unethical and unexpected behaviours. The widespread adoption of AI solutions faces challenges related to their transparency, since humans must be enabled to fully understand the outputs of such systems in order to make informed decisions. To address these concerns, a shift toward a human-centered approach is emerging when it comes to interacting with AI systems. In this new scenario, Human-Computer Interaction (HCI) plays a pivotal role and cross-fertilizes AI to reach the human-AI symbiosis. Designers and developers should gravitate towards Symbiotic AI (SAI), whose goal is to support humans without replacing them and to establish a symbiotic relationship with users, adapting to their cognitive models. This contribution presents a proposal for a framework to design high-quality SAI systems, along with metrics that can be employed to appropriately evaluate them. Opportunities and challenges that characterize this new research context are also presented and discussed.</p>
      </abstract>
      <kwd-group>
        <kwd>Symbiotic AI</kwd>
        <kwd>Human-Centered Design</kwd>
        <kwd>Design</kwd>
        <kwd>Evaluation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The technological advancement of modern society has introduced Artificial Intelligence (AI) in multiple
fields (e.g., medicine, transportation, education), bringing innovation with new services and products
that can boost productivity and reduce the demand for repetitive tasks in terms of time and resources
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. However, the current AI scenario raises pressing ethical and legal questions: AI must be used
responsibly, avoiding potential misuse, biases, and infringements of human rights. In this regard, new
guidelines and regulations are emerging to ensure the responsible and correct design, development, and
use of AI-based systems, prioritizing human well-being and societal values.
      </p>
      <p>
        Professionals in diverse domains broadly use AI to make decisions that often lead to irreversible
consequences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Such issues must be addressed when creating AI systems: end users are typically not computer
scientists and cannot fully comprehend, from a technical standpoint, the processes that produce a system’s
outputs. Designers and developers must consider users’ cognitive models, skills,
and needs to create AI systems that establish a symbiotic relationship between humans and machines.
This concept plays a pivotal role in the introduction of Symbiotic AI (SAI). This expression refers to
AI systems that enhance and support humans in performing their activities without replacing them,
adapting to their mental and physical models, allowing them to make informed decisions and avoid
negative feelings [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. For example, a physician can rely on an AI-powered system that classifies
patients as ill or healthy based on MRI scans; at the same time, they must be enabled to correctly interpret
the system’s output and to comprehend the motivations that generated the response to make informed
decisions [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The human-AI interaction must be enhanced by recognizing humans’
responses and emotions, embracing their needs and adapting to their behaviors [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Therefore, humans
should play a central role throughout the design of SAI systems, employing the Human-Centered Design
(HCD) methodology while following the processes and techniques that belong to the agile development
of Software Engineering (SE) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This approach aims at merging and linking disciplines, such as
Law, Human-Computer Interaction (HCI) and Software Engineering (SE), boosting the productivity of
designers and developers while improving users’ experience when interacting with such systems.
      </p>
      <p>This contribution presents a proposal for a novel approach to design and evaluate SAI systems; it
illustrates a conceptual multidisciplinary framework and preliminary considerations about metrics to
assess the desired properties of such systems (e.g., trustworthiness, safety, reliability, etc.).</p>
      <p>This paper is structured as follows: Section 2 describes the motivations of this research work; Section
3 presents its main goals and research questions; Section 4 illustrates the proposal of a comprehensive
conceptual framework; Section 5 presents preliminary considerations about metrics for the evaluation
of SAI systems and Section 6 concludes and details the future steps of this research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Research Motivations</title>
      <p>
        Understanding AI models is a complex and challenging task for humans because of their black-box
nature, such as deep learning models which are characterized by complex mathematical operations
involving millions or billions of parameters. Such models can recognize intricate, nonlinear patterns,
making it highly difficult for humans to interpret how specific inputs lead to particular outputs [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
This issue underlines the need for Transparency, which encompasses Explainability and Interpretability:
transparent AI models provide users with adequate explanations of the processes employed to produce
specific outputs, making those outputs interpretable by humans, i.e., mappable from abstract
concepts to something they can make sense of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Understanding the
motivations behind specific outputs is crucial to guarantee humans the right level of control while
balancing automation: even if automation can increase efficiency, users should be allowed to intervene
and control the system’s performance when appropriate [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>
        ].
      </p>
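      <p>The transparency requirement above can be made concrete with a small sketch. The following example is hypothetical (the feature names, weights, and scoring scheme are illustrative assumptions, not from this paper): a deliberately interpretable linear scorer that reports each feature's signed contribution, so a user can trace why an output was produced.

```python
# Hedged sketch: a deliberately transparent scorer. Feature names, weights,
# and the MRI framing are illustrative assumptions, not from the paper.

def explain_linear_score(features, weights, bias=0.0):
    """Return the score plus each feature's signed contribution,
    so a user can see why the output was produced."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical MRI-derived features, echoing the physician example in the text.
features = {"lesion_volume": 2.4, "contrast_uptake": 1.1, "patient_age": 0.6}
weights = {"lesion_volume": 0.8, "contrast_uptake": 0.5, "patient_age": 0.1}

score, contributions = explain_linear_score(features, weights, bias=-1.5)
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:+.2f}")
```

Because every contribution is a plain product of a weight and an input value, the mapping from inputs to output stays inspectable; a black-box model would instead require post-hoc explanation techniques.</p>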
      <p>
        This research work emphasizes that creating AI systems not only involves mere technicalities but
also ethical, legal, and anthropological dimensions. For instance, Kieseberg et al. state that "AI possess
three key characteristics throughout its entire life-cycle: Lawfulness, adherence to ethical principles and
technological, as well as, social robustness" [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this regard, the main topics that are worth researching
for this study are represented in the diagram shown in Figure 1. Specifically, Technicalities ensure that
through the human-AI symbiosis, the user’s cognitive abilities are enhanced while guaranteeing the
right level of control of the system; Ethical Aspects are focused on ethical concerns in SAI systems;
Human Factors highlight the need for a usable and explainable AI to allow the user to make informed
decisions and to easily comprehend the system’s performance [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. It is important to underline that
these three dimensions are also related to each other due to the interdisciplinary nature of the field.
Thus, it would be possible to design artefacts that, being governance-compliant and fair, can guarantee
human-driven decision-making.
      </p>
      <p>
        In this regard, the European Union (EU) has formalized the factors concerning AI by releasing the
Artificial Intelligence Act (AIA), which delineates a regulation with respect to the design, development,
and employment of AI through a risk-based approach [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Similarly to what the General Data Protection
Regulation (GDPR) did for privacy and security, this regulation is shaping the future of AI in the Union and
in the rest of the world, underlining that users must always be protected. The GDPR
governs how data in the EU is stored, processed, and transferred, implying that designers and developers
must necessarily comply with this regulation to create systems that can be appropriately used by end
users [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. It is important to consider the users’ characteristics since integrating information about
their mental model from the training phase of AI models can lead to the creation of non-biased and
ethically compliant SAI systems.
      </p>
      <p>Establishing best practices and guidelines for designers and developers to create AI systems that
foster symbiosis must start with defining principles that act as the leitmotif of the research in this
context.</p>
      <p>Figure 1 depicts these three dimensions and their interplay: Technicalities (augmenting human
ability, enabling human control), Ethical Aspects (ethical and fair AI, human-driven decision making),
and Human Factors (usable and explainable AI), all centered on the user.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Objectives and Research Questions</title>
      <p>Based on the topics explored in Sections 1 and 2, the overall objectives of this research are listed below:
• Create a comprehensive framework tailored to the design of SAI systems that encompasses AI,
HCI, SE, Law, and Ethics.
• Define new interaction paradigms and new user interfaces that align with SAI requirements.
• Design transparent models that allow users to understand the system’s behavior and make informed
decisions.
• Delineate an evaluation framework and metrics to assess the human-AI symbiosis considering its
desired properties (e.g., trustworthiness, safety, and reliability). This framework will provide an
instrument to assess the effectiveness of the human-AI relationship in SAI systems.</p>
      <p>This contribution represents a starting point to achieve the research goal, which is guided by the
following research questions:
(RQ1) How can the methodologies of HCI be integrated into the processes that belong to SE to develop
SAI systems?
(RQ2) How can the legal and ethical requirements concerning AI be integrated into a framework for the
development of compliant SAI systems?
(RQ3) How can the current challenges in conventional metrics for evaluating SAI systems be addressed to
assess the human-AI symbiosis?</p>
    </sec>
    <sec id="sec-4">
      <title>4. Comprehensive Framework</title>
      <p>Creating SAI systems can be a challenging objective and must be addressed through a multidisciplinary
approach, encompassing diverse domains that range from Computer Science to Law. Figure 2 presents
a conceptual version of the comprehensive framework that embraces four main research areas:
Human-Computer Interaction (HCI), Law &amp; Ethics, Software Engineering (SE), and Artificial Intelligence (AI).
Although these disciplines are characterized by their own principles, guidelines, and techniques, this
framework aims to define the connections and influences among them and find the links that reinforce
human-machine symbiosis. It is underlined that the domains involved in this research are all relevant
and they equally contribute to the achievement of the goal.</p>
      <p>
        The following sections describe each component of the framework, illustrating its role in the SAI
scenario.
Human-Computer Interaction (HCI) It is the bridge that stands between the technical side of
Computer Science and the human studies of Psychology. It is the discipline that stresses how crucial
end users are, defining methodologies and techniques that must be employed when designing any
kind of product. Developing usable AI systems is key to establishing a symbiotic relationship because
they must allow users to reach their goals with effectiveness, efficiency, and satisfaction. Thus, they
need to provide useful feedback, offer clear affordances, and deliver gratification to users during
their interaction with the system [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. It emerges that the Human-Centered Design (HCD) is the core
of HCI and addresses the complete and continuous involvement of humans by employing different
techniques, such as interviews, questionnaires, field studies, and focus groups, providing quantitative
and qualitative data to obtain rich insights about the users’ needs, preferences, behaviors, and cognitive
models [
        <xref ref-type="bibr" rid="ref16 ref7">16, 7</xref>
        ].
      </p>
      <p>
        Law and Ethics They represent one of the four pillars of this framework, encompassing the factors
that influence the creation of AI systems from a regulatory, philosophical, and ethical standpoint.
Designers and developers must be aware of these regulations in order to create compliant products that
preserve users’ social, working, and personal well-being. From a legal standpoint, the main elements to
consider are the AIA and the GDPR. These regulations define the ethical principles that any kind of
system should possess in order to be available to society [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ].
      </p>
      <p>
        Software Engineering (SE) It defines how software is created through standardized methodologies
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. This framework aims at guiding designers and developers in the creation of SAI systems, ensuring that
they operate following a human-centered approach, while complying with the legal requirements
and implementing high-performing AI systems [18]. Therefore, the objective is to integrate the Agile
principles and the processes of the Agile Development Lifecycle with those belonging to the SAI design,
creating a mapping that does not exclude any discipline.
      </p>
      <p>Artificial Intelligence This dimension is related to technical elements of the design of SAI systems
by suggesting techniques and standards in relation to the actual implementation of AI models. The
models can be applied in various real-world domains, e.g., business, finance, healthcare, agriculture,
smart cities, and cybersecurity. Depending on the activities that have to be performed, different models
can be employed, along with different AI tasks, such as classification, prediction, and description [19]. Other
than the mere metrics for the evaluation of AI models, this framework revolves around Transparency; it
embraces other techniques, such as Explainability and Interpretability and has the goal of providing
insight into how models work, why specific decisions are made, and what data is used to reach those
decisions.</p>
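      <p>As a minimal illustration of a post-hoc probe in this spirit (a hypothetical sketch under stated assumptions, not a technique proposed by this paper), one can perturb a single input of an opaque model and observe how its output moves:

```python
# Hedged sketch: finite-difference sensitivity of a black-box model's output
# to one input feature. The model below is a stand-in for any opaque callable.

def black_box(x):
    # Stand-in for an opaque model: any callable mapping inputs to a score.
    return 0.7 * x["a"] ** 2 + 0.3 * x["b"]

def sensitivity(model, x, name, delta=1e-3):
    """Estimate how strongly the output depends on one input feature."""
    x_up = dict(x)
    x_up[name] = x[name] + delta
    return (model(x_up) - model(x)) / delta

x = {"a": 2.0, "b": 1.0}
for name in x:
    print(name, round(sensitivity(black_box, x, name), 3))
```

Such a probe needs no access to the model's internals, which is what makes it applicable to black-box models; it tells the user which inputs drove a decision, not how the model computed it.</p>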
      <p>These disciplines are intrinsically intertwined because HCI serves to create interactive solutions
that are intuitive, accessible, and usable. Beyond usability, considerations of legality and ethics are
crucial to guide the development process to ensure compliance with regulations and adherence to
ethical principles, safeguarding humans. The process has to be carried out following standardized SE
practices to build robust architecture and ensure a reliable implementation of systems powered by AI.
Such systems not only enhance human capabilities but also foster symbiotic relationships between
humans and technology, redefining how we interact with AI.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Metrics for Evaluating SAI</title>
      <p>User behavior and AI system performance have traditionally been considered unrelated aspects to analyze
and evaluate [20]. This point of view led to the definition of metrics that separately evaluate the user side
and the AI side (i.e., User Experience (UX) metrics and AI metrics).</p>
      <p>
        In this new scenario, where human and AI performance are strictly correlated, it becomes
necessary to define novel metrics that can evaluate both user and AI performance and,
consequently, the human-AI symbiotic relationship [21, 22]. These metrics should revolve around the
principle of trustworthiness which ensures that SAI systems can be trusted, operate safely, and exhibit
reliable behavior [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The starting point for the definition of these metrics is considering the gaps in the traditional approach
in which users and AI systems are evaluated separately. As shown in Figure 3, these two dimensions
must intersect in order to properly assess the symbiotic relationship. Considering both UX metrics
to evaluate human behavior and AI metrics to measure the AI model performance, it is possible to obtain
an initial definition of metrics able to assess the human-AI symbiosis. For instance, in this scenario, it is
important to consider not only information about the dataset but also about the user’s mental model
and cognitive abilities [23].</p>
      <p>Preliminary considerations about novel metrics are presented below based on known research
available in the literature.</p>
      <p>• Trustworthiness can be employed to evaluate the extent to which the human-AI symbiotic
relationship enhances user trust. In particular, it can measure the level of user trust
in terms of prevention of undesired system behaviors and the correctness of the decisions
taken. Trustworthiness can be achieved if an AI system possesses several other properties, for
example, safety, fairness, and sustainability.<sup>1</sup>
• Interaction Enhancement (IE) is a novel metric that can be defined to evaluate the extent to which
user cognitive abilities can be enhanced through SAI systems. In this way, whether the strict
human-AI collaboration brings benefits can be assessed by considering the effort exerted by the user
during the interaction process.
<sup>1</sup>https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html</p>
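      <p>A hypothetical sketch of how such a combined metric might look is given below. The formula, the weighting, and the effort normalization are illustrative assumptions, not the metrics this paper will formally define; the point is only that an AI-side measure and a UX-side measure enter a single score.

```python
# Hedged sketch: one way a symbiosis score *might* blend a UX-side measure
# (user effort) with an AI-side measure (model accuracy). Illustrative only.

def interaction_enhancement(accuracy, user_effort, max_effort, w_ai=0.5):
    """Blend AI accuracy (0..1) with normalized effort savings (0..1).

    Spending less effort than the max_effort baseline counts as enhancement."""
    effort_savings = 1.0 - min(user_effort / max_effort, 1.0)
    return w_ai * accuracy + (1.0 - w_ai) * effort_savings

# Example: a 90%-accurate model and 20 s of user effort on a 60 s baseline task.
score = interaction_enhancement(accuracy=0.9, user_effort=20, max_effort=60)
print(round(score, 3))
```

Under this assumed weighting, a perfectly accurate system that removes all user effort would score 1.0, while an accurate system that demands as much effort as the unassisted baseline would be penalized.</p>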
      <p>Finally, to determine whether the proposed solutions are valid and how they can be improved, they will
be assessed through user studies verifying that they can be correctly applied in real-world contexts.
These studies will provide insights into the effectiveness of the proposed metrics and the SAI system’s
user interface.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and Future Work</title>
      <p>This contribution presents the current challenges and opportunities that characterize this new scenario
concerning the interaction between humans and AI.</p>
      <p>SAI systems aim to enhance the user’s cognitive abilities and to guarantee the right
balance between human control and AI system automation. Being a new field of research with specific
demands and delicate requirements, it is important to determine principles, guidelines, and practices to
guide designers and developers in the process of creating AI systems that comply with regulations and
human needs. Undertaking a methodological and empirical approach in this context also involves the
proper evaluation of these systems [24].</p>
      <p>This paper presents a proposal for a framework to lead computer scientists in the creation of SAI
systems and preliminary considerations about an assessment strategy based on novel metrics that
measure the extent to which such systems are trustworthy and compliant with the defined guidelines.
In this regard, preliminary ideas about how to address the existing challenges are proposed which will
be further revised and validated.</p>
      <p>The future work of this research concerns the formal definition of the framework and metrics in
question and their validity assessment. In this process, end-users will be involved to have direct feedback
and suggestions. Real-case scenarios and user studies will be used to evaluate the entire methodology,
examining how the framework performs when used by designers and developers and determining the
effectiveness of the metrics when employed for the evaluation of existing SAI systems.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The research of Miriana Calvano and Antonio Curci is supported by the co-funding of the European
Union - Next Generation EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 – Partnerships
extended to universities, research centers, companies, and research D.D. MUR n. 341 del 15.03.2022 – Next
Generation EU (PE0000013 – “Future Artificial Intelligence Research – FAIR” - CUP: H97G22000210007).
</p>
      <p>[18] D. Salah, R. F. Paige, P. Cairns, A systematic literature review for agile development processes and
user centred design integration, in: Proceedings of the 18th International Conference on Evaluation
and Assessment in Software Engineering, ACM, London, United Kingdom, 2014, pp. 1–10.
doi:10.1145/2601248.2601276.
[19] I. Sarker, AI-based modeling: Techniques, applications and research issues towards automation,
intelligent and smart systems, 2022. doi:10.20944/preprints202202.0001.v1.
[20] V. S. Barletta, F. Cassano, A. Pagano, A. Piccinno, New perspectives for cyber security in software
development: when end-user development meets artificial intelligence, 2022, pp. 531–534.
doi:10.1109/3ICT56508.2022.9990622.
[21] S. Ruhela, Thematic correlation of human cognition and artificial intelligence, in: 2019 Amity
International Conference on Artificial Intelligence (AICAI), 2019, pp. 367–370.
[22] V. S. Barletta, F. Cassano, A. Pagano, A. Piccinno, A collaborative AI dataset creation for speech
therapies, volume 3136, 2022, pp. 81–85.
[23] J. E. Block, E. D. Ragan, Micro-entries: Encouraging deeper evaluation of mental models over time
for interactive data systems, in: 2020 IEEE Workshop on Evaluation and Beyond - Methodological
Approaches to Visualization (BELIV), 2020, pp. 38–47.
[24] R. Conradi, A. I. Wang, ESERNET (Eds.), Empirical methods and studies in software engineering:
experiences from ESERNET, number 2765 in Lecture Notes in Computer Science, Springer,
Berlin, 2003. doi:10.1007/b11962.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Holt</surname>
          </string-name>
          ,
          <source>Artificial intelligence in modern society</source>
          ,
          <year>2018</year>
          . URL: https://api.semanticscholar.org/CorpusID:115683678.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Cockburn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Henderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stern</surname>
          </string-name>
          ,
          <article-title>The impact of artificial intelligence on innovation</article-title>
          ,
          <source>IRPN: Innovation &amp; Cyberlaw &amp; Policy (Topic)</source>
          (
          <year>2018</year>
          ). URL: https://api.semanticscholar.org/CorpusID:91169962.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Andolina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          ,
          <article-title>Introduction to the special issue on ai, decision-making, and the impact on humans</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>39</volume>
          (
          <year>2023</year>
          )
          <fpage>1367</fpage>
          -
          <lpage>1370</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:256892177.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Grigsby</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence for advanced human-machine symbiosis</article-title>
          , in: D. D.
          <string-name>
            <surname>Schmorrow</surname>
          </string-name>
          ,
          <string-name>
            <surname>C. M. Fidopiastis</surname>
          </string-name>
          (Eds.),
          <source>Augmented Cognition: Intelligent Technologies</source>
          , Springer International Publishing, Cham,
          <year>2018</year>
          , pp.
          <fpage>255</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Melles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Albayrak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Goossens</surname>
          </string-name>
          ,
          <article-title>Innovating health care: key characteristics of human-centered design</article-title>
          ,
          <source>International Journal for Quality in Health Care</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>37</fpage>
          -
          <lpage>44</lpage>
          . URL: https://doi.org/10.1093/intqhc/mzaa127. doi:10.1093/intqhc/mzaa127.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Weippl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Tjoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>Controllable ai - an alternative to trustworthiness in complex ai systems?</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Tjoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Weippl</surname>
          </string-name>
          (Eds.),
          <source>Machine Learning and Knowledge Extraction</source>
          , Springer Nature Switzerland, Cham,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>International Organization for Standardization</surname>
          </string-name>
          ,
          <source>ISO 9241-210: Ergonomics of human-system interaction - Human-centred design for interactive systems</source>
          ,
          <year>2019</year>
          . URL: https://www.iso.org/standard/77520.html.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Iyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sundar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sycara</surname>
          </string-name>
          ,
          <article-title>Transparency and explanation in deep reinforcement learning neural networks</article-title>
          , in:
          <source>Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society</source>
          , AIES '18, Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          , pp.
          <fpage>144</fpage>
          -
          <lpage>150</lpage>
          . URL: https://doi.org/10.1145/3278721.3278776. doi:10.1145/3278721.3278776.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Montavon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-R.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Methods for interpreting and understanding deep neural networks</article-title>
          ,
          <source>Digital Signal Processing</source>
          <volume>73</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/S1051200417302385. doi:10.1016/j.dsp.2017.10.011.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <source>Human-Centered AI</source>
          , 1 ed., Oxford University Press, Oxford,
          <year>2022</year>
          . URL: https://academic.oup.com/book/41126. doi:10.1093/oso/9780192845290.001.0001.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Aha</surname>
          </string-name>
          ,
          <article-title>DARPA's explainable artificial intelligence (XAI) program</article-title>
          ,
          <source>AI Magazine</source>
          <volume>40</volume>
          (
          <year>2019</year>
          )
          <fpage>44</fpage>
          -
          <lpage>58</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/abs/10.1609/aimag.v40i2.2850. doi:10.1609/aimag.v40i2.2850.
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Enqvist</surname>
          </string-name>
          ,
          <article-title>'Human oversight' in the EU artificial intelligence act: what, when and by whom?</article-title>
          ,
          <source>Law, Innovation and Technology</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>508</fpage>
          -
          <lpage>535</lpage>
          . URL: https://www.tandfonline.com/doi/full/10.1080/17579961.2023.2245683. doi:10.1080/17579961.2023.2245683.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Dainoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>39</volume>
          (
          <year>2023</year>
          )
          <fpage>494</fpage>
          -
          <lpage>518</lpage>
          . doi:10.1080/10447318.2022.2041900.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>The European Commission</surname>
          </string-name>
          ,
          <article-title>Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts</article-title>
          ,
          <year>2024</year>
          . URL: http://thomas.loc.gov/cgi-bin/query/z?c102:H.CON.RES.1.IH.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Gazzetta Ufficiale dell'Unione Europea</surname>
          </string-name>
          ,
          <article-title>General Data Protection Regulation (GDPR): Regulation (EU) 2016/679</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Preece</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <article-title>Interaction Design: beyond human-computer interaction</article-title>
          , 5 ed., John Wiley &amp; Sons, Inc.,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bourque</surname>
          </string-name>
          , R. E. Fairley (Eds.),
          <source>SWEBOK: guide to the software engineering body of knowledge</source>
          , version 3.0 ed., IEEE Computer Society, Los Alamitos, CA,
          <year>2014</year>
          . OCLC: 880350861.
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>