<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Ital-IA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Symbiotic AI: What is the Role of Trustworthiness?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Miriana Calvano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Curci</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rosa Lanzilotti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Piccinno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bari "Aldo Moro"</institution>
          ,
          <addr-line>Via Edoardo Orabona 4, 70125, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pisa</institution>
          ,
          <addr-line>Largo B. Pontecorvo 3, 56127, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>29</volume>
      <fpage>29</fpage>
      <lpage>30</lpage>
      <abstract>
        <p>The design, development, and use of Artificial Intelligence (AI) is crucial in modern society. The traditional design of AI systems focuses on models with very high performance without highlighting how relevant the role of humans is in this context. To create AI systems that suit end users' needs and preferences, it is important to involve them in each phase of the system life cycle. AI systems must present interfaces and interaction paradigms that enhance users' cognitive models, ensuring usability and a positive User Experience (UX). In this new scenario, Human-Computer Interaction (HCI) and AI contaminate each other, leading to the human-AI symbiosis. Researchers should shift the focus toward Symbiotic AI (SAI) systems, which aim to enhance humans' abilities without replacing them. This manuscript presents preliminary considerations for the creation of a framework to design high-quality SAI systems and metrics that can be employed to appropriately evaluate them. Being a novel field, it focuses on the current investigation regarding the definition of the properties of SAI systems, stressing the importance of Trustworthiness, and whether new design principles for SAI systems can be extracted from the AI Act.</p>
      </abstract>
      <kwd-group>
        <kwd>Symbiotic AI</kwd>
        <kwd>Trustworthiness</kwd>
        <kwd>Design and Evaluation</kwd>
        <kwd>Human-Centered Approach</kwd>
        <kwd>AI Act (AIA)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The human-AI symbiosis can be described as a relationship between humans and AI systems in which "the human understands and intuitively reacts to the machine, and the machine understands and intuitively reacts to the human" [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        To reach the human-AI symbiosis, users should trust
the system’s decisions and properly comprehend them,
making Trustworthiness one of the main properties to
consider when dealing with such systems. However, due
to the novelty of the field, limited work is available in
the literature. Our research aims to propose a
comprehensive framework and evaluation metrics to support
designers, developers, and AI specialists in creating and
evaluating Symbiotic AI (SAI) systems that inspire trust,
ensure fairness, and are responsible and compliant with
the various domains in which they operate [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>This manuscript is structured as follows: Section 2 presents how trustworthiness can be defined in the SAI field, exploring the perspectives of the European Commission and academia; Section 3 describes the approach that will be undertaken to design and evaluate SAI systems; Section 4 concludes and explores the future work of the project.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Trustworthiness for SAI Systems</title>
      <p>
        For people and society, trustworthiness is undoubtedly one of the prerequisites that AI systems should have to be used without hesitation [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It, therefore, becomes
the starting point of our research because of its breadth
and multifaceted nature. In this section, the concept of
trustworthiness is explored by analysing the perspectives
of European policymakers and academics to determine
how to consider it in the context of SAI.
      </p>
      <sec id="sec-2-2">
        <title>2.1. The European Commission Perspective</title>
        <p>This section focuses on two documents drafted by the European Commission: the Ethics Guidelines for Trustworthy AI and the AIA. The goal is to delineate a clear image of the standpoints of policymakers to create AI products that fully comply with laws, regulations, and norms, and to track the efforts of the EU concerning human rights, ethics, and philosophical issues.</p>
        <sec id="sec-2-2-1">
          <title>2.1.1. Ethics Guidelines for Trustworthy AI</title>
          <p>
            The role of the High-Level Expert Group on AI (AI HLEG) is to define the approach of the European Commission with respect to AI by indicating the key principles and policies. In 2019, they drafted the "Ethics Guidelines for Trustworthy AI" report, which identifies seven requirements of Trustworthiness, identified as the umbrella property to ensure a human-centric approach to AI [
            <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
            ], illustrated in Figure 1. Such requirements are briefly described in the following:
• Human Agency and Oversight: incorporating mechanisms for human intervention in critical decision-making processes ensures human control and supervision over AI systems to prevent unintended consequences.
• Technical Robustness and Safety: developing AI systems necessitates a risk-preventive approach that ensures reliable behavior, minimizing and preventing unintentional and unexpected harm.
• Privacy and Data Governance: ensuring privacy protection requires robust data governance, encompassing both the quality and integrity of the data used in processing.
• Transparency: the elements of the system must be transparent enough to let users comprehend the reasons behind the decisions it takes.
• Diversity, Non-Discrimination and Fairness: involving all stakeholders throughout the entire system lifecycle ensures equal access through inclusive design processes and equitable treatment.
• Societal and Environmental Well-being: maximizing the sustainability, social impact, and ecological responsibility of AI systems to positively contribute to society while minimizing negative consequences.
• Accountability: creating mechanisms to ensure the accountability of AI systems, before and after their development, deployment, and use, guarantees fairness [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-2-2">
          <title>2.1.2. The Artificial Intelligence Act (AIA)</title>
          <p>
            Starting from the requirements of Trustworthy AI, listed in Section 2.1.1, in 2021 the EU defined the AIA to regulate the adoption of harmonised and standardized rules for AI systems. Specifically, it merges trustworthiness with a risk-based approach to determine the acceptability of the types of systems through norms and regulations [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]. The risk-based approach outlines four categories of AI systems in relation to the risks they might cause:
1. Unacceptable Risk: it encompasses systems that might include prohibited AI practices that must be banned to guarantee a well-functioning society, such as those that might threaten minorities or those used by public authorities.
2. High Risk: it regards systems used in fields such as education and vocational training, access to private and public services, law enforcement, etc.
3. Limited Risk: it covers systems subject to specific transparency obligations, such as chatbots, whose users must be made aware that they are interacting with an AI system.
4. Minimal Risk: it includes all other systems, such as spam filters, which can be used freely under existing legislation.
          </p>
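Purely as an illustration (the AIA prescribes legal criteria, not executable rules), the four-tier logic above can be sketched as a simple lookup table; the mapping from example application domains to tiers below is hypothetical:

```python
# Hypothetical sketch of the AIA's four-tier risk classification.
# The domain-to-tier mapping is illustrative, not a legal determination.

RISK_TIERS = {
    "social_scoring_by_public_authorities": "Unacceptable Risk",  # prohibited practice
    "education_admission_scoring": "High Risk",                   # regulated domain
    "customer_service_chatbot": "Limited Risk",                   # transparency obligations
    "spam_filter": "Minimal Risk",                                # free use
}

def classify(domain: str) -> str:
    """Return the risk tier for a known domain, defaulting to Minimal Risk."""
    return RISK_TIERS.get(domain, "Minimal Risk")

print(classify("education_admission_scoring"))  # High Risk
```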
          <p>The investigation of our research work consists in understanding which of these principles are applicable to SAI and in identifying potential new properties.</p>
        </sec>
      </sec>
      <sec id="sec-4-1">
        <title>2.2. The Academic Perspective</title>
        <p>Ben Shneiderman, one of the pioneers of HCI, proposes trustworthiness as one of three principles, along with safety and reliability, of human-centered AI (HCAI) systems, which guarantee an appropriate balance of automation and human control. Specifically:
• Trustworthiness concerns the property that makes systems deserving of being trusted by humans.
• Reliability comes from the application of technical practices of software engineering that build systems that produce appropriate and/or expected responses.
• Safety is a strategy to guide the refinement of the model performance to prevent potential failure and improper use [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].</p>
        <p>The three above-mentioned properties are the most recurrent in the literature, since they are the main areas of research and can encompass the other properties; nevertheless, the state of the art concerning human-AI interaction considers 22 other properties that can influence the design and development of any kind of system (e.g., usable, observable, explainable, resilient, agile, etc.) [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. The Impact of Trustworthiness in SAI</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conceptual Framework for SAI Systems</title>
      <p>The starting point is understanding the gaps in the traditional approach to the development of AI systems to determine the changes to propose and the integration of new processes into the software lifecycle. This conceptual framework aims to support designers and developers in creating and evaluating SAI systems. The objective is to provide a standardized methodology to those who create AI-powered services that reduce the gap between technology and humans and decrease the cognitive demand of interpreting and understanding the outputs that systems produce.</p>
      <p>The objective of this work lies in defining a framework that considers and merges the two perspectives (i.e., the Ethics Guidelines for Trustworthy AI and the AIA), while identifying principles, guidelines, and techniques that belong to different disciplines by finding the appropriate links. Figure 2 presents an initial version of the conceptual framework, which consists of two layers, Design and Assessment, explained below.</p>
      <sec id="sec-4-2">
        <title>3.1. Design</title>
        <p>This layer embraces four main research areas that
contribute equally: Human-Computer Interaction (HCI), Law
&amp; Ethics, Software Engineering (SE), and AI. The
following sections describe each component of the framework,
illustrating its role in the SAI scenario.</p>
        <p>
          Human-Computer Interaction (HCI) HCI is one of
the pivotal components of this framework because the
symbiotic relationship can be achieved if such systems
allow users to reach their goals with effectiveness,
efficiency, and satisfaction, thus being usable and
providing a positive user experience. Other key elements
that HCI is responsible for are feedback and affordance,
enabling humans to understand how the system should
be used, making them feel at ease with proper
communication [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Involving humans iteratively during each
phase of the system’s lifecycle implies performing
interviews, questionnaires, field studies, and focus groups to
perform quantitative and qualitative evaluations of the
systems and to obtain rich insights concerning the users’
needs, preferences and cognitive models [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ].
Ethical &amp; Legal Factors This dimension considers the
regulatory, philosophical, and ethical standpoint since
designers and developers must create products that
preserve users’ social, working, and personal well-being.
One of the main issues concerning AI, which becomes
particularly valid for the branch of SAI, consists of
avoiding biases and ensuring fairness. This element must be
always be considered, because the root of biases is found
in how data is treated by AI models, for example, in the
learning phase. This determines the unfair behavior of
systems that can influence humans’ decisions when
employing AI as an instrument. The legal standpoint must
be considered for designing and developing AI systems
to create products that comply with regulations and can
be released to the public. Currently, the main elements
to consider are the AIA and the General Data
Protection Regulation (GDPR); the first regulates the design,
development, and use of AI systems in the EU, while the
GDPR is a law that defines how data is handled, stored,
and processed [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>These regulations define the ethical principles that any
kind of system should possess to be available to society.</p>
        <sec id="sec-4-2-1">
          <title>Artificial Intelligence (AI) This dimension refers to</title>
          <p>
            AI from a technical and algorithmic standpoint because
the framework aims to suggest the appropriate
techniques and practices to adopt depending on the
requirements of the systems to create. AI models, along with
high computational power, can be employed in
multiple domains, such as business, finance, healthcare,
agriculture, smart cities, and cybersecurity; however, they
cannot be used as a one-size-fits-all solution because, depending on the activities, different tasks are needed (e.g., classification, prediction, description), raising the need for context-specific models, parameters, and variables
[
            <xref ref-type="bibr" rid="ref16">16</xref>
            ]. The effectiveness of SAI systems is not guaranteed by simply obtaining high-performing models, but rather by systems that properly integrate Transparency, Explainability, and Interpretability. These provide users with the right instruments to comprehend the processes behind the outputs that influence their decisions, and to know what data is responsible for the system’s responses.
          </p>
          <p>
            Software Engineering (SE) This framework aims to
guide designers and developers in creating SAI systems,
ensuring that they operate by following a human-centered
approach while complying with legal requirements and
implementing high-performing AI systems. Thus, the
objective is to integrate the Agile principles and the
processes of the Agile Development Lifecycle with those
belonging to the SAI design, creating a mapping that
does not exclude any discipline [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>3.2. Assessment</title>
        <p>In this new scenario, where a strict correlation and
contamination exists between human and AI performance,
it becomes essential to define novel metrics to assess the
human-AI symbiotic relationship.</p>
        <p>Traditionally, human beings and AI have been viewed
as distinct and unrelated entities, causing UX and AI
metrics to be defined independently to evaluate both human
behavior and system performance. Considering them in
unison, it is possible to draft a preliminary set of
metrics that can be employed to assess the symbiosis. By
integrating both the dataset and user information and
considering the user’s characteristics from the training
phase of the AI model, it is possible to foster
symbiosis, making the system’s behaviour as adaptable as possible to the user’s needs.</p>
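Purely as an illustration of considering UX and AI metrics in unison, a preliminary symbiosis indicator could be sketched as a weighted mean of a normalised UX score and a model-performance score; the weighting scheme below is our own assumption, not a metric defined in this work.

```python
# Illustrative-only combination of a UX score and a model-performance
# score (both assumed normalised to the range 0..1) into one indicator.
# The weighted-mean formulation is an assumption for demonstration.

def symbiosis_score(ux_score: float, model_score: float, ux_weight: float = 0.5) -> float:
    """Weighted mean of a UX score and an AI performance score."""
    return ux_weight * ux_score + (1.0 - ux_weight) * model_score

print(symbiosis_score(0.8, 1.0))
```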
        <p>
          Since Trustworthiness allows users to trust systems that
operate safely and exhibit reliable behavior, it is
contemplated as one of the starting points of this research work
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Assessing this aspect is difficult since it varies across
many application contexts [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]; therefore, it is necessary
to understand whether its evaluation should consider
it as a stand-alone property or as an ensemble of other
dimensions, such as safety, fairness, robustness, etc. (https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html).
        </p>
        <p>Two potential metrics are proposed to assess how trustworthy an AI system is: Preventing Undesired System Behaviors, which refers to how effectively the system avoids actions that could potentially harm the user or deviate from expected behavior; and Correctness of Decisions, which measures the extent to which the system’s decisions align with user expectations and desired outcomes.</p>
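A minimal sketch of how these two metrics might be computed over a log of user interactions; the record fields ("undesired", "aligned") and the ratio-based formulation are our own assumptions, not definitions from this work.

```python
# Hypothetical operationalisation of the two proposed metrics over an
# interaction log. Each record flags whether the system's action deviated
# from expected behavior and whether its decision matched the user's
# expectation; the metric names come from the text, the formulas are ours.

def undesired_behavior_prevention(log):
    """Fraction of interactions in which no undesired behavior occurred."""
    return sum(1 for r in log if not r["undesired"]) / len(log)

def decision_correctness(log):
    """Fraction of decisions aligned with user expectations."""
    return sum(1 for r in log if r["aligned"]) / len(log)

log = [
    {"undesired": False, "aligned": True},
    {"undesired": False, "aligned": False},
    {"undesired": True,  "aligned": True},
    {"undesired": False, "aligned": True},
]
print(undesired_behavior_prevention(log))  # 0.75
print(decision_correctness(log))           # 0.75
```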
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusions</title>
      <p>This paper presents preliminary considerations concerning the novel field of Symbiotic AI with respect to Trustworthiness. It presents the main challenges of identifying the principles of this field while stressing the need for a human-centered approach when dealing with AI systems of any kind. This research work is the starting ground for the definition of a comprehensive framework, presented in Section 3, that encompasses multiple disciplines and aims to guide designers and developers in creating SAI systems. This framework is still in its early stages and at a conceptual state. Delineating a standardized approach to assess the behavior and performance of such systems is crucial to ensure the proper deployment of AI, which is part of the daily lives of countless individuals. As Trustworthiness plays a pivotal role in an effective human-AI interaction, the future of this research will focus on determining its complementary principles and its impact on symbiosis by carrying out verticalized case studies and performing in-depth investigations in the literature.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The research of Miriana Calvano and Antonio Curci is
supported by the co-funding of the European Union
Next Generation EU: NRRP Initiative, Mission 4,
Component 2, Investment 1.3 – Partnerships extended to
universities, research centers, companies, and research
D.D. MUR n. 341 del 15.03.2022 – Next Generation EU
(PE0000013 – “Future Artificial Intelligence Research –
FAIR” - CUP: H97G22000210007).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>European Commission</surname>
          </string-name>
          ,
          <article-title>Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts</article-title>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sanderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Douglas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Schleiger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Whittle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lacey</surname>
          </string-name>
          , G. Newnham,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hajkowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <article-title>Ai ethics principles in practice: Perspectives of designers and developers</article-title>
          ,
          <source>IEEE Transactions on Technology and Society</source>
          <volume>4</volume>
          (
          <year>2023</year>
          )
          <fpage>171</fpage>
          -
          <lpage>187</lpage>
          . doi:10.1109/TTS.2023.3257303.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A Survey of Methods for Explaining Black Box Models</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          . doi:10.1145/3236009.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Plaisant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Elmqvist</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          ,
          <article-title>Designing the User Interface: Strategies for Effective Human-Computer Interaction</article-title>
          , 6 ed.,
          <source>Pearson Education</source>
          ,
          <year>2016</year>
          . URL: https://books.google.it/books?id=PpItDAAAQBAJ.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>International Organization for Standardization</surname>
          </string-name>
          ,
          <article-title>ISO 9241-210: Ergonomics of human-system interaction</article-title>
          ,
          <year>2019</year>
          . URL: https://www.iso.org/standard/77520.html.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Preece</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          , Interaction Design:
          <article-title>beyond human-computer interaction</article-title>
          , 5 ed., John Wiley &amp; Sons, Inc.,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>International Organization for Standardization</surname>
          </string-name>
          ,
          <article-title>ISO 9241-210: Ergonomics of human-system interaction: Human-centred design for interactive systems</article-title>
          ,
          <year>2019</year>
          . URL: https://www.iso.org/standard/77520.html.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bourque</surname>
          </string-name>
          , R. E. Fairley (Eds.),
          <article-title>SWEBOK: guide to the software engineering body of knowledge, version 3</article-title>
          .0 ed.,
          <source>IEEE Computer Society</source>
          , Los Alamitos, CA,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Grigsby</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence for advanced human-machine symbiosis</article-title>
          , in: D. D.
          <string-name>
            <surname>Schmorrow</surname>
          </string-name>
          ,
          <string-name>
            <surname>C. M. Fidopiastis</surname>
          </string-name>
          (Eds.),
          <source>Augmented Cognition: Intelligent Technologies</source>
          , Springer International Publishing, Cham,
          <year>2018</year>
          , pp.
          <fpage>255</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vahabava</surname>
          </string-name>
          ,
          <article-title>The risks associated with generative AI apps in the European Artificial Intelligence Act (AIA)</article-title>
          ,
          <source>in: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI)</source>
          ,
          <source>CEUR Workshop Proceedings</source>
          , Munich, Germany,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>European Commission</surname>
          </string-name>
          ,
          <article-title>Ethics guidelines for trustworthy AI</article-title>
          ,
          <year>2021</year>
          . URL: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Laux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <article-title>Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk</article-title>
          ,
          <source>Regulation &amp; Governance</source>
          <volume>18</volume>
          (
          <year>2024</year>
          )
          <fpage>3</fpage>
          -
          <lpage>32</lpage>
          . doi:10.1111/rego.12512.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <source>Human-Centered AI</source>
          , 1 ed., Oxford University Press, Oxford,
          <year>2022</year>
          . URL: https://academic.oup.com/book/41126. doi:10.1093/oso/9780192845290.001.0001.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kitchenham</surname>
          </string-name>
          ,
          <article-title>Procedures for Performing Systematic Reviews</article-title>
          ,
          <source>Technical Report</source>
          , Keele University,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Gazzetta Ufficiale dell'Unione Europea</surname>
          </string-name>
          ,
          <article-title>General Data Protection Regulation (GDPR): Regulation (EU) 2016/679</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sarker</surname>
          </string-name>
          ,
          <article-title>AI-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems</article-title>
          ,
          <year>2022</year>
          . doi:10.20944/preprints202202.0001.v1.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Salah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Paige</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cairns</surname>
          </string-name>
          ,
          <article-title>A systematic literature review for agile development processes and user centred design integration</article-title>
          ,
          <source>in: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering</source>
          , ACM, London England United Kingdom,
          <year>2014</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . doi:10.1145/2601248.2601276.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>