<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An AI Act-Driven Design for Detecting Brain Tumors through Reconfiguration</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Antonio Curci</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Esposito</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Bari Aldo Moro</institution>
          ,
          <addr-line>Via E. Orabona 4, 70125 Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of Pisa</institution>
          ,
          <addr-line>Largo B. Pontecorvo 3, 56127 Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Although Artificial Intelligence (AI) is permeating countless domains of application in modern society, it is important to design, develop, and deploy AI-based software that safeguards humans and their well-being. The AI Act, the European Union's legal framework to regulate AI, sets a new standard that must be met when creating such systems, which must protect human rights and emphasize human agency in decision-making processes. This research proposes the architecture of an interaction paradigm, designed starting from AI Act principles, aiming to support medical physicians in detecting brain tumors through a multi-modal model. The goal is to establish a symbiotic relationship between humans and AI in which the limitations of one can be compensated by the strengths of the other, while highlighting the importance of humans' judgment and expertise in making diagnoses.</p>
      </abstract>
      <kwd-group>
        <kwd>Symbiotic Artificial Intelligence</kwd>
        <kwd>Multi-Modal Model</kwd>
        <kwd>Medicine</kwd>
        <kwd>Decision-Making</kwd>
        <kwd>Human-AI Collaboration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As scientific and technological progress advances at a very fast pace, Artificial Intelligence (AI) becomes
more and more integrated in everyday activities. AI-based systems can vary depending on the domain
in which they are deployed and used, being powered by different models, technologies, and interaction
mechanisms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Although AI can strongly support humans in performing repetitive and time-consuming tasks,
there are several challenges that such systems can introduce regarding ethics, societal well-being
and safety, and human agency [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In 2024, the European Union (EU) released a legal framework, the
Artificial Intelligence Act (AI Act), with the goal of regulating the creation, deployment, and use of AI. It
undertakes a human-centric and risk-based approach that considers humans in all of their dimensions,
beyond their role as users interacting with a system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The constraints and obligations that
this legal framework introduces depend on the domain the system is intended for and the risks it could
impose on humans and society. The AI Act strongly stresses the Trustworthiness of AI systems:
it is built over time through use of the system, while being a necessary precondition for regulatory
compliance, as it increases the system’s adoption and acceptance in humans’ workflows.
      </p>
      <p>
        Among the numerous fields in which AI is being introduced—e.g., education, industry, finance—
medicine can be one of the most critical. AI is bringing substantial aid to physicians and patients,
translating into faster diagnoses, more effective therapies, and significant steps forward in research.
At the same time, several challenges must be taken into account: if these tools are misused or provide
wrong suggestions to physicians, the consequences might be highly severe or, in some cases, irreversible
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This raises the need for creating AI systems that emphasize human agency while fostering effective
collaboration with humans, making both parties work towards a common goal. The category of systems
characterized by such features is called Symbiotic Artificial Intelligence, which encompasses a
subset of Human-Centered Artificial Intelligence (HCAI) systems that aim at enhancing humans’ skills,
compensating for their limitations, and exploiting the interaction process to learn over time [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The
factors that influence the establishment of a trustworthy human–AI relationship are multifaceted. For
instance, ensuring that humans are in-the-loop can strongly affect the development of trust dynamics.
In Symbiotic Artificial Intelligence (SAI), for example, keeping humans in control and informed about
the processes that lie behind AI’s output can be the gateway for enabling both parties to learn over
time and exhibit adaptive behavior. Several techniques can be employed to implement
interaction paradigms that support the integration of human feedback in the model’s adaptation—for
example, explainability. In this scenario, it can have a two-fold objective: first, it allows the system
to provide users with explanations about its decision-making process [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and, second, it serves as an
instrument for humans to indicate where to intervene in the correction of the output [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        This research work proposes a new AI-based system, called BrainDetect, that aims
to detect brain tumors based on gray-scale 2-D Magnetic Resonance Imaging (MRI) scans and tabular
data concerning the image. It is powered by a multi-modal model presented in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], for which a User
Interface (UI) is being created along with an interaction mechanism that exploits Gradient-weighted
Class Activation Mapping (GradCAM) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] explanation outputs to retrain the model based on human
expertise.
      </p>
      <p>The article is structured as follows: section 2 discusses the importance of keeping humans in-the-loop
and at the center of the decision-making process, exploring an interaction paradigm and the AI Act;
section 3 illustrates the proposed architecture with the explanation-based intervention mechanism and
presents a prototype of the UI; section 4 reports the conclusions and the future work of the research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Keeping Humans in the Loop</title>
      <p>
        AI is strongly contributing to the diagnosis of diseases and illnesses thanks to its ability to process large
amounts of data in short amounts of time, supporting physicians in detection and recognition activities
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Tumors represent an exemplary case in which AI can be substantially helpful. This
research work focuses on brain tumors, which are abnormal growths of cells within the brain or its
surrounding structures and can be either benign or malignant [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. These tumors pose a significant
health concern due to their complex and heterogeneous nature, rapid progression, and high mortality
rates. Early and accurate detection is critical, as it can improve the effectiveness of therapies, reducing
the risk of irreversible neurological damage [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        The models that power the AI-based solutions for tumor detection are progressively improving,
providing more support to humans. At the same time, this sophistication comes at the cost
of steadily increasing complexity. In this regard, a technique that has been
gaining more interest in the last few years is the use of more than one modality of data to train an AI
model. Multi-modal approaches can increase accuracy, taking into account multiple and heterogeneous
aspects and contributing to more reliable outcomes [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2.1. Interactive Machine Learning for Reconfiguration</title>
        <p>
          When it comes to creating AI systems that support physicians in performing such delicate tasks (e.g.,
tumor detection, tumor treatment), designers and developers might face several challenges in letting
users be properly aware of the processes that lie behind the systems’ output and the motivations that
led to the outcomes. Transparency, which is the intelligibility of the algorithm itself and its inner
workings [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], plays a crucial role in this context, as it enables users to obtain insights about the model,
its structure, and processes. Explainability, on the other hand, indicates the property of the model
to generate human-understandable explanations of its outcomes and decision-making processes [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
Although black-box models should be avoided [
          <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
          ], their high performance often justifies their
use, thus making post-hoc explainability techniques useful as a workaround [
          <xref ref-type="bibr" rid="ref16 ref5">5, 16</xref>
          ]. In the case of
convolutional processes for images, one of the most widely used methods is Class Activation Mapping
(CAM), specifically, GradCAM, which highlights the spatial regions in the input image that most
influence the model’s reasoning by leveraging the gradients of the target class with respect to the
feature maps [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. These methods can have a double-sided function, representing both the explanation
of the reasoning process and the instrument to modify or correct the outcome reached by the model.
Exploiting such explanations for reconfiguring the model can be particularly useful for implementing
Interactive Machine Learning (IML), which allows human expertise to be integrated in the model,
adjusting its performance based on their judgement and experience [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. The integration of IML into
workflows can represent a significant step towards the establishment of a symbiotic relationship between
humans and AI, improving collaboration [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
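<p>The GradCAM computation described above can be illustrated with a few lines of NumPy (a minimal sketch of the weighting scheme in [9], independent of any specific deep-learning framework; the function name and toy dimensions are illustrative, not part of BrainDetect):</p>

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """GradCAM heatmap from a convolutional layer's feature maps and the
    gradients of the target class score with respect to those maps.

    feature_maps: (K, H, W) activations; gradients: (K, H, W) gradients.
    Returns an (H, W) heatmap normalized to [0, 1]."""
    # Channel weights: global-average-pool the gradients (the alpha_k weights).
    alphas = gradients.mean(axis=(1, 2))
    # Weighted sum of the feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()  # normalize for display as an overlay
    return cam

# Toy example: 4 feature maps of size 8x8 with random values.
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((4, 8, 8)), rng.random((4, 8, 8)))
```

<p>In practice the feature maps and gradients would come from a framework’s backward pass, and the heatmap would be upsampled to the MRI resolution before being shown as an overlay.</p>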
        <p>
          In this regard, this work adopts the interaction paradigm presented by Desolda et al., which highlights
that humans must be provided with the necessary instruments to make informed decisions when
using an AI system, especially in medicine, while being enabled to iteratively be part of the model’s
reasoning [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The paradigm has three building blocks at its core: Clarification, Reconfiguration, and
Iterative Exploration. More specifically, Clarification concerns providing users with usable explanations
concerning how the system reached its output, Reconfiguration enables physicians to revise and check
the outputs, correcting the system’s response when necessary, and Iterative Exploration represents the
strategy that allows users to perform decision-making step-by-step and iteratively [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. The AI Act and Decision Making</title>
        <p>
          The AI Act is reshaping the way that AI systems are being created and deployed, introducing new
obligations that aim at safeguarding the well-being of humans and society [
          <xref ref-type="bibr" rid="ref20 ref3">3, 20</xref>
          ]. The legal framework
introduces a risk-based classification of AI systems: unacceptable risk, high risk, limited risk, and
minimal risk. Depending on this classification, these systems must comply with various obligations
and standards concerning multiple aspects, ranging from ensuring human oversight and control to
requiring high-quality documentation from deployers [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. For instance, Article 10 sets a standard
concerning training data which must be fair, representative, and free from bias [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. With respect to
decision-making and activities that can have an impact on other individuals, Article 14 emphasizes that
AI systems must be designed to allow human intervention or override, ensuring humans remain in
control over critical decisions [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          This research work relies on the main principles that the AI Act is based on, highlighting the
importance of its application in decision-making scenarios. If properly implemented, the legal framework
can contribute to the achievement of a symbiotic relationship between humans and AI, which finds an
almost natural application in scenarios in which humans are required to make choices. Decision-making
is a very delicate and intricate process influenced by various factors that touch on cognition, emotions,
expertise, and personal experience [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. Any external input, such as AI’s responses, can alter physicians’
traditional way of carrying out tasks like creating diagnoses or therapies. Thus, an AI system must ensure that its
users are provided with the proper instruments and conditions to reach outcomes that are not harmful
to society or other individuals [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. For example, in the case of brain tumor detection, a wrong diagnosis,
blindly accepted by a physician, can lead to unnecessary treatments, which could seriously damage
patients’ health. This implies that humans should trust AI only if they are put in the conditions to use
their judgment to distinguish the appropriateness of the outputs, even if they are not AI specialists or
computer scientists [
          <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
          ].
        </p>
        <p>
          The application of the interaction paradigm and the research presented in [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] provided the
instruments to build the proposed interaction paradigm for BrainDetect, as well as the initial
wire-frame prototypes of its UI, as described in the next section.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Explanation-Based Intervention</title>
      <p>
        The multi-modal model that BrainDetect features is composed of two main channels that are merged
into one through a concatenation layer. The two inputs that it supports are 2-D grayscale MRI scan
images of the human brain and the related tabular data [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Although the current model exhibits high
levels of accuracy (99%), it is important to ensure that end-users are provided with the right instruments
to determine the correctness of its outputs and intervene when necessary.
      </p>
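<p>A minimal sketch of how such a concatenation-based fusion might look follows; all dimensions and names are illustrative assumptions, not taken from [8]:</p>

```python
import numpy as np

def fuse_modalities(image_embedding, tabular_features, w, b):
    """Sketch of the fusion step: the image branch's embedding and the
    tabular branch's features are concatenated, then a dense head maps
    the fused vector to the two class scores (ill / healthy)."""
    fused = np.concatenate([image_embedding, tabular_features])  # concatenation layer
    logits = w @ fused + b                                       # dense classification head
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                       # softmax over the two classes

# Toy dimensions: a 16-d image embedding plus 4 tabular attributes -> 20-d fused vector.
rng = np.random.default_rng(1)
probs = fuse_modalities(rng.random(16), rng.random(4),
                        rng.random((2, 20)), rng.random(2))
```

<p>In the actual model each branch would be a trained sub-network; here random weights stand in only to show the shape of the concatenation.</p>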
      <p>
        This research proposes an architecture of an IML system [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] based on GradCAM explainability
output generated upon the classification of human brain MRI scans to detect tumors. The interaction
paradigm in question is illustrated in fig. 1. The goal is to keep humans always in-the-loop, enabling
them to adjust the AI model’s reasoning process based on their expertise and knowledge, ensuring
a suitable level of automation of the system for carrying out their task properly. At the same time,
transparency is also strongly considered by integrating an explainability technique, GradCAM, to ensure that
physicians can grasp the areas of interest of the model.
      </p>
      <p>
        After receiving the MRI scan and tabular data as input to the system, the model processes them and
provides a binary classification output: ill or healthy. The physician can either agree (case 1 in fig. 1)
with the classification or disagree with it. In the latter case, the physician either has not detected a
tumor at all (case 2 in fig. 1), or has detected a tumor elsewhere with respect to the areas highlighted by
the system (case 3 in fig. 1). In both cases, human feedback is provided to the model, affecting its future
decisions by reinforcing or inhibiting its behavior. This can be implemented, for example, through
Reinforcement Learning (RL) from human feedback [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
      </p>
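<p>The three cases above can be sketched as a small routing step; the structure and names below are hypothetical illustrations, not the BrainDetect implementation:</p>

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Review:
    """A physician's review of one prediction."""
    agrees_with_ai: bool                               # True -> case 1
    tumor_found: bool = False                          # distinguishes case 2 from case 3
    corrected_patch: Optional[Tuple[int, int]] = None  # selected area in case 3

def route_review(review: Review) -> str:
    """Map the three cases to the next interaction step."""
    if review.agrees_with_ai:
        return "send_positive_feedback"      # case 1: reinforce current behavior
    if not review.tumor_found:
        return "reconfigure_no_tumor"        # case 2: no tumor detected by the physician
    return "reconfigure_relocate_tumor"      # case 3: tumor detected elsewhere
```

<p>Cases 2 and 3 both lead to the Reconfiguration Screen, where the disagreement is turned into an inhibiting training signal.</p>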
      <p>In case 1, the feedback is sent to the model with no further details. In cases 2 and 3, the user is led
to the Reconfiguration Screen, illustrated in fig. 2. Here, the MRI appears subdivided into patches of
equal size, each clickable and available for selection. By selecting one patch (or multiple adjacent ones),
physicians indicate to the system an area that may contain a tumor.</p>
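<p>The equal-size subdivision and patch selection can be sketched as follows; the grid size and scan dimensions are illustrative assumptions:</p>

```python
import numpy as np

def selection_mask(image_hw, grid, selected):
    """Binary mask (H, W) marking the patches clicked by the physician.

    image_hw: (H, W) size of the MRI scan; grid: (rows, cols) of the
    equal-size subdivision; selected: (row, col) patch coordinates."""
    h, w = image_hw
    rows, cols = grid
    ph, pw = h // rows, w // cols  # size of each equal patch
    mask = np.zeros((h, w), dtype=np.uint8)
    for r, c in selected:
        mask[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 1
    return mask

# A 128x128 scan split into a 4x4 grid; two adjacent patches are selected.
mask = selection_mask((128, 128), (4, 4), [(1, 2), (1, 3)])
```

<p>The resulting mask localizes the physician’s correction, so the feedback can be compared against the region highlighted by GradCAM.</p>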
      <p>
        If the physician disagrees with the AI system, they are asked to express their confidence in their
decision—for example, through a simple semantic differential scale (see fig. 2). This allows the model
to weigh human feedback during its adaptation. Although further investigation is needed, this design
decision was made since corrections can be noisy or, at times, wrong. It represents a way of “letting
AI know” that the user is in disagreement with its output but still uncertain. To reach high-quality
outcomes, it is important to avoid fitting the model to potentially incorrect corrections, which could
hinder the human–AI trust dynamic [
        <xref ref-type="bibr" rid="ref1">1, 28</xref>
        ].
      </p>
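<p>One simple way to realize such weighting, shown purely as an illustration under assumed names and loss form (the paper does not specify the adaptation mechanism), is to scale a correction’s loss contribution by the reported confidence:</p>

```python
import math

def weighted_correction_loss(p_model, target, confidence, scale=1.0):
    """Cross-entropy on a physician-corrected binary label, scaled by the
    physician's self-reported confidence in [0, 1]: low-confidence
    corrections pull the model less, guarding against noisy feedback."""
    eps = 1e-12
    p = p_model if target == 1 else 1.0 - p_model  # probability of the corrected label
    return scale * confidence * -math.log(max(p, eps))

# The model predicted "ill" with p=0.9; the physician corrects it to "healthy".
strong = weighted_correction_loss(p_model=0.9, target=0, confidence=1.0)
weak = weighted_correction_loss(p_model=0.9, target=0, confidence=0.2)
```

<p>A fully confident correction thus moves the model five times more than one reported at 0.2 confidence, limiting the influence of uncertain feedback.</p>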
      <p>The final objective is to enable continuous learning on the system’s behalf, with humans guiding the
process by correcting mistakes or highlighting important features.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>
        The strong impact that AI has on modern society is being regulated by legal bodies working towards
a more ethical and safe creation and deployment of such systems, especially in application domains
requiring decisions that can impact other individuals. Medicine is the domain analyzed in this research,
which proposes an architecture for a multi-modal model that detects brain tumors. The interaction
paradigm focuses on complying with the AI Act, keeping humans in-the-loop by ensuring that they
can revise and check the predictions made by the system, and correcting potential mistakes made
by the model. The ultimate goal, as mentioned in the sections above, is to reach symbiosis between
humans and AI, where both can learn from each other, improving over time, and compensating for
one’s limitations with the other’s strengths [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Making BrainDetect fall under the category of SAI is an
objective that is being undertaken from the beginning of the project, which is serving as a case study
for the investigation of the necessary instruments to pursue Symbiosis-by-Design.
      </p>
      <p>
        Currently, the work presented here is mostly a proposal: although the actual AI model for classification
exists (see [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], ongoing research efforts aim at introducing human feedback in the AI model training
and in implementing the interaction loop presented in fig. 1. Therefore, future work of this research
regards implementing and refining BrainDetect by adhering to Human-Centred Design principles [29].
      </p>
      <p>Through user studies, the interaction loop presented in fig. 1 could be further refined by exploring
additional implicit factors (e.g., decision-making time) that could provide indications on the evolution
of human–AI trust. Such factors could be instrumental to the model adaptation.</p>
      <p>It is also intended to investigate the integration of the selection of non-adjacent areas of the brain in
the reconfiguration step, specifically for critical patients with a brain that has multiple ill regions. An
additional user study is required to assess the effectiveness of this proposal in accomplishing human–AI
symbiosis by analyzing how the interaction mechanism proposed in fig. 1 impacts users’ performance
and their trust in AI.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>The research of Antonio Curci is supported by the co-funding of the European Union - Next Generation
EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 – Partnerships extended to universities,
research centers, companies, and research D.D. MUR n. 341 del 15.03.2022 – Next Generation EU
(PE0000013 – “Future Artificial Intelligence Research – FAIR” - CUP: H97G22000210007). The research
of Andrea Esposito is funded by a Ph.D. fellowship within the framework of the Italian “D.M. n. 352,
April 9, 2022”- under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment
3.3 – Ph.D. Project “Human-Centred Artificial Intelligence (HCAI) techniques for supporting end users
interacting with AI systems,” co-supported by “Eusoft S.r.l.” (CUP H91I22000410007).</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.
[28] P. Kieseberg, E. Weippl, A. M. Tjoa, F. Cabitza, A. Campagner, A. Holzinger, Controllable AI - An
Alternative to Trustworthiness in Complex AI Systems?, in: A. Holzinger, P. Kieseberg, F. Cabitza,
A. Campagner, A. M. Tjoa, E. Weippl (Eds.), Machine Learning and Knowledge Extraction, volume
14065, Springer Nature Switzerland, Cham, 2023, pp. 1–12. URL: https://link.springer.com/10.
1007/978-3-031-40837-3_1. doi:10.1007/978-3-031-40837-3_1, series Title: Lecture Notes in
Computer Science.
[29] ISO, 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design
for interactive systems, 2019. URL: https://www.iso.org/standard/77520.html.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] B. Shneiderman, Human-Centered AI, 1st ed., Oxford University Press, Oxford, 2022. URL: https://academic.oup.com/book/41126. doi:10.1093/oso/9780192845290.001.0001.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] F. Paternò, M. Burnett, G. Fischer, M. Matera, B. Myers, A. Schmidt, Artificial Intelligence versus End-User Development: A Panel on What Are the Tradeoffs in Daily Automations?, in: C. Ardito, R. Lanzilotti, A. Malizia, H. Petrie, A. Piccinno, G. Desolda, K. Inkpen (Eds.), Human-Computer Interaction - INTERACT 2021, volume 12936, Springer International Publishing, Cham, 2021, pp. 340-343. URL: https://link.springer.com/10.1007/978-3-030-85607-6_33. doi:10.1007/978-3-030-85607-6_33, series Title: Lecture Notes in Computer Science.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] European Parliament, Council of the European Union, Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), 2024.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] W. Xiong, H. Fan, L. Ma, C. Wang, Challenges of human-machine collaboration in risky decision-making, Frontiers of Engineering Management 9 (2022) 89-103. URL: https://link.springer.com/10.1007/s42524-021-0182-0. doi:10.1007/s42524-021-0182-0.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] G. Desolda, A. Esposito, R. Lanzilotti, A. Piccinno, M. F. Costabile, From human-centered to symbiotic artificial intelligence: a focus on medical applications, Multimedia Tools and Applications (2024). URL: https://link.springer.com/10.1007/s11042-024-20414-5. doi:10.1007/s11042-024-20414-5.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] G. Desolda, G. Dimauro, A. Esposito, R. Lanzilotti, M. Matera, M. Zancanaro, A Human-AI interaction paradigm and its application to rhinocytology, Artificial Intelligence in Medicine 155 (2024) 102933. URL: https://linkinghub.elsevier.com/retrieve/pii/S0933365724001751. doi:10.1016/j.artmed.2024.102933.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Calvano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Curci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lanzilotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          ,
          <article-title>Explanation-Driven Interventions for Artificial Intelligence Model Customization: Empowering End-Users to Tailor Black-Box AI in Rhinocytology</article-title>
          , in:
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bellucci</surname>
          </string-name>
          (Eds.),
          <source>End-User Development</source>
          , volume
          <volume>15713</volume>
          , Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>170</lpage>
          . URL: https://link.springer.com/10.1007/978-3-031-95452-8_10. doi:10.1007/978-3-031-95452-8_10.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Curci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <article-title>Detecting Brain Tumors Through Multimodal Neural Networks</article-title>
          , in:
          <source>13th International Conference on Pattern Recognition Applications and Methods</source>
          , SCITEPRESS - Science and Technology Publications, Lda., Rome, Italy,
          <year>2024</year>
          , pp.
          <fpage>995</fpage>
          -
          <lpage>1000</lpage>
          . URL: https://arxiv.org/abs/2402.00038. doi:10.5220/0012608600003654.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Selvaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cogswell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vedantam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parikh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Batra</surname>
          </string-name>
          ,
          <article-title>Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization</article-title>
          ,
          <source>International Journal of Computer Vision</source>
          <volume>128</volume>
          (
          <year>2020</year>
          )
          <fpage>336</fpage>
          -
          <lpage>359</lpage>
          . URL: http://link.springer.com/10.1007/s11263-019-01228-7. doi:10.1007/s11263-019-01228-7.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Göndöcs</surname>
          </string-name>
          , V. Dörfler,
          <article-title>AI in medical diagnosis: AI prediction &amp; human judgment</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>149</volume>
          (
          <year>2024</year>
          )
          <article-title>102769</article-title>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/ S0933365724000113. doi:
          <volume>10</volume>
          .1016/j.artmed.
          <year>2024</year>
          .
          <volume>102769</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Louis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Perry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Reifenberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>von Deimling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Figarella-Branger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. K.</given-names>
            <surname>Cavenee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ohgaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. D.</given-names>
            <surname>Wiestler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kleihues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Ellison</surname>
          </string-name>
          ,
          <article-title>The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary</article-title>
          ,
          <source>Acta Neuropathologica</source>
          <volume>131</volume>
          (
          <year>2016</year>
          )
          <fpage>803</fpage>
          -
          <lpage>820</lpage>
          . URL: http://link.springer.
          <source>com/10.1007/s00401-016-1545-1</source>
          . doi:
          <volume>10</volume>
          .1007/ s00401-016-1545-1.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Stupp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Weller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Belanger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Bogdahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Ludwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lacombe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. O.</given-names>
            <surname>Mirimanoff</surname>
          </string-name>
          ,
          <article-title>Radiotherapy plus Concomitant and Adjuvant Temozolomide for Glioblastoma</article-title>
          ,
          <source>New England Journal of Medicine</source>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Shang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Understanding Multimodal Deep Neural Networks: A Concept Selection View</article-title>
          ,
          <year>2024</year>
          . URL: http://arxiv.org/abs/2404.08964. doi:10.48550/arXiv.2404.08964, arXiv:2404.08964 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rudin</surname>
          </string-name>
          ,
          <article-title>Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>206</fpage>
          -
          <lpage>215</lpage>
          . URL: https://www.nature.com/articles/s42256-019-0048-x. doi:10.1038/s42256-019-0048-x.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R. O.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Johs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <article-title>XAI is in trouble</article-title>
          ,
          <source>AI Magazine</source>
          <volume>45</volume>
          (
          <year>2024</year>
          )
          <fpage>300</fpage>
          -
          <lpage>316</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12184. doi:10.1002/aaai.12184.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C. O.</given-names>
            <surname>Retzlaff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Angerschmid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saranti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schneeberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Röttger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists</article-title>
          ,
          <source>Cognitive Systems Research</source>
          <volume>86</volume>
          (
          <year>2024</year>
          )
          <fpage>101243</fpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/S1389041724000378. doi:10.1016/j.cogsys.2024.101243.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Wondimu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Buche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Visser</surname>
          </string-name>
          ,
          <article-title>Interactive Machine Learning: A State of the Art Review</article-title>
          ,
          <year>2022</year>
          . URL: http://arxiv.org/abs/2207.06196. doi:10.48550/arXiv.2207.06196, arXiv:2207.06196 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Is Artificial Intelligence Better Than Human Clinicians in Predicting Patient Outcomes?</article-title>
          ,
          <source>Journal of Medical Internet Research</source>
          <volume>22</volume>
          (
          <year>2020</year>
          )
          <fpage>e19918</fpage>
          . URL: http://www.jmir.org/2020/8/e19918/. doi:10.2196/19918.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Desolda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Dimauro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lanzilotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zancanaro</surname>
          </string-name>
          ,
          <article-title>A Human-AI interaction paradigm and its application to rhinocytology</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>155</volume>
          (
          <year>2024</year>
          )
          <fpage>102933</fpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/S0933365724001751. doi:10.1016/j.artmed.2024.102933.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Neuwirth</surname>
          </string-name>
          ,
          <article-title>Prohibited artificial intelligence practices in the proposed EU artificial intelligence act (AIA)</article-title>
          ,
          <source>Computer Law &amp; Security Review</source>
          <volume>48</volume>
          (
          <year>2023</year>
          )
          <fpage>105798</fpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/S0267364923000092. doi:10.1016/j.clsr.2023.105798.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gyevnar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferguson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schafer</surname>
          </string-name>
          ,
          <article-title>Bridging the Transparency Gap: What Can Explainable AI Learn from the AI Act?</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nowé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fairstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rădulescu</surname>
          </string-name>
          (Eds.),
          <source>Frontiers in Artificial Intelligence and Applications</source>
          , IOS Press,
          <year>2023</year>
          . URL: https://ebooks.iospress.nl/doi/10.3233/FAIA230367. doi:10.3233/FAIA230367.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>C.</given-names>
            <surname>Giachino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cepel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Truant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bargoni</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence-driven decision making and firm performance: a quantitative approach</article-title>
          ,
          <source>Management Decision</source>
          (
          <year>2024</year>
          )
          . URL: https://www.emerald.com/insight/content/doi/10.1108/MD-10-2023-1966/full/html. doi:10.1108/MD-10-2023-1966.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Niraula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Cuneo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Dinov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. D.</given-names>
            <surname>Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Jamaluddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Matuszak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Ten Haken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Bryant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Dilling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Dykstra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Frakes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Liveringhouse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Mills</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Palm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Regan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rishi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Torres-Roca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-H. M.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>El Naqa</surname>
          </string-name>
          ,
          <article-title>Intricacies of human-AI interaction in dynamic decision-making for precision oncology</article-title>
          ,
          <source>Nature Communications</source>
          <volume>16</volume>
          (
          <year>2025</year>
          )
          <article-title>1138</article-title>
          . URL: https://www.nature.com/articles/s41467-024-55259-x. doi:10.1038/s41467-024-55259-x.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Urquhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>McGarry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Crabtree</surname>
          </string-name>
          ,
          <article-title>Legal Provocations for HCI in the Design and Development of Trustworthy Autonomous Systems</article-title>
          , in:
          <source>Nordic Human-Computer Interaction Conference</source>
          , ACM, Aarhus, Denmark,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3546155.3546690. doi:10.1145/3546155.3546690.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Widder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Dabbish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Herbsleb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holloway</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Davidoff</surname>
          </string-name>
          ,
          <article-title>Trust in Collaborative Automation in High Stakes Software Engineering Work: A Case Study at NASA</article-title>
          , in:
          <source>Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          , ACM, Yokohama, Japan,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3411764.3445650. doi:10.1145/3411764.3445650.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Calvano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Curci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Desolda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lanzilotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          ,
          <article-title>Building Symbiotic AI: Reviewing the AI Act for a Human-Centred, Principle-Based Framework</article-title>
          ,
          <year>2025</year>
          . URL: http://arxiv.org/abs/2501.08046. doi:10.48550/arXiv.2501.08046, arXiv:2501.08046 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bengs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hüllermeier</surname>
          </string-name>
          ,
          <article-title>A Survey of Reinforcement Learning from Human Feedback</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2312.14925. doi:10.48550/ARXIV.2312.14925.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>