<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How do doctors perceive AI in their medical practice?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Sartori</string-name>
          <email>l.sartori@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chiara Binelli</string-name>
          <email>chiara.binelli@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Lizzi</string-name>
          <email>francesca.lizzi@pi.infn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Chincarini</string-name>
          <email>andrea.chincarini@ge.infn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Sensi</string-name>
          <email>francesco.sensi@ge.infn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandra Retico</string-name>
          <email>alessandra.retico@pi.infn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AI</institution>
          ,
          <addr-line>Responsibility, AI reliability and robustness, Human cognition-aware AI, Collaborative AI, Explainable AI</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bologna</institution>
          ,
          <addr-line>Dipartimento di Scienze Politiche e Sociali</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Although artificial intelligence (AI) has proven valuable in sensitive domains such as healthcare, its application in clinical practice is still not ubiquitous. We conducted an online survey of clinicians and medical physicists from Italian professional medical associations to investigate the drivers of, and the obstacles to, the use of AI in medical practice. Increased efficiency is the doctors' main drive to use AI systems, while the main obstacle is AI's availability in clinical facilities. Caution is recommended by respondents for both developers and users of AI tools.</p>
        <p>Healthcare has always been an area open to technological innovation, and the introduction of AI is no exception. For several decades there has been talk of telemedicine and individualized therapy, and the use of AI for personalised medicine, where diagnoses and treatments are effectively tailored to individual patients, seems to be within reach. Among others, the potential of AI relates to assistance in treatment planning and personalized care, acceleration of drug and vaccine discovery, accuracy of diagnoses, early disease detection, support for medical research and data analysis, reduction of doctors' time spent on routine tasks, and improvements in the doctor-patient relationship. However, little attention has been given to the changes in medical practice. Adoption by doctors remains low due to algorithmic, legal, social and institutional barriers (Goldfarb et al., 2022 [1]).</p>
      </abstract>
      <kwd-group>
        <kwd>Participatory AI</kwd>
        <kwd>barriers to AI adoption</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>CEUR Workshop Proceedings (ISSN 1613-0073).</p>
      <p>[…] quality or characteristic of an entity and its performance, which provides sufficient reason to justify the attitude of trust) (Starke et al. 2022 [4], Sartori et al., 2025 [5]). Moreover, a general positive attitude towards AI does not automatically translate into willingness to adopt AI (Schulz et al. 2023 [6]): it is mediated by personal socio-technical imaginaries (Sartori and Bocca 2022 [7]), organizational settings and institutional practices (Elish and Watkins 2020 [8]).</p>
      <p>Institutional barriers refer to management and technology availability within the organization.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Purpose</title>
      <p>Given the potentially significant barriers to the use of AI in medical practice, in this paper we investigate the research question of the actual adoption of AI systems in the daily practice of medical doctors. We study this question in the context of Italy, using an original survey instrument to investigate the different types of barriers that could prevent the adoption of AI in healthcare. The focus of our study is on the social, organizational and institutional barriers.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <p>We built the questionnaire in Qualtrics (https://www.qualtrics.com/) and administered the survey between January and November 2024 to a sample of medical doctors working in Italian hospitals. More than 300 doctors replied to the survey, but only 193 completed all mandatory questions. The sample includes 57 Medical Physicists, 39 Radiologists, 15 Nuclear Medicine specialists, 20 Neurologists, 4 Oncologists, 4 Radiotherapists, and 54 Doctors with a different specialization.</p>
      <p>The questionnaire is organized in five sections: 1. Perceptions, attitudes and awareness of AI; 2. Socio-technical imaginaries of AI; 3. Adoption of AI tools by the hospitals; 4. AI use in clinical practice by the medical doctors; 5. AI's potential adoption in clinical practice. We targeted the communities of doctors (Radiologists and Neuroradiologists, Nuclear Medicine doctors, Neurologists, Radiotherapists and Radiation oncologists, and Medical Physicists) through their National Professional Associations, which helped sponsor the survey.</p>
      <p>At the beginning of the questionnaire, before Section 1, we provided respondents with the following definition of Artificial Intelligence: "Artificial Intelligence (AI) refers to computer systems that are able to perform specific tasks and make decisions without being provided with explicit instructions and that, through a process of learning from data, are able to make predictions independently of human intervention". Providing this general definition of AI ensures comparability of responses across survey respondents, independently of the different topics and specific AI instruments they may be using.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. AI adoption</title>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Tasks for which AI is employed</title>
        <p>It is clear from the answers provided to the question "In which cases have you used an AI tool?" that our sample population is biased towards imaging-related professions: the vast majority of the respondents chose "Image post-processing" or "Assistance in image interpretation" (Fig. 3).</p>
        <p>[Figure: Factors determining AI adoption at the individual level. Question "Why did you start using AI systems?"; possible choices: "I can save time"; "I can get a second opinion, independent and automatic"; "I can get repeatable, comparable and publishable results faster"; "I can reduce the number of slow or repetitive tasks in my daily work"; "Physicians and medical physicists who use AI will in future replace those who do not"; "I can integrate my training with it"; "I trust AI technology for its accuracy"; "I feel AI has been forced on me"; "Doctors who do not start using AI will in future be replaced by AI systems"; "I can delegate some decision-making responsibilities"; answers in %.]</p>
        <p>[Figure: further answer options: "It is not available in the facility where I work"; "I have not received sufficient training"; "AI is not mentioned in any official guidelines"; "I do not trust a technology whose decisions I cannot understand"; "It is not necessary to introduce variations in established medical practices"; answers in %.]</p>
        <p>Doctors believe that the added value of data-driven tools lies where the largest body of data is: support to decision-making and support to diagnosis (Fig. 4).</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Who is responsible?</title>
        <p>Responsibility for AI use is a debated theme: in our survey, we asked a question on who is responsible for AI errors, and no clear consensus emerges among the respondents about the responsibility for misuse or malfunctioning of AI. The majority view points to the clinicians: Figure 5 shows that over 35% of respondents state that the responsibility for AI errors lies with the clinicians in charge.</p>
        <p>[Figure: Use cases for AI at the individual level. Question "For which use cases have you used an AI tool?"; answers in %.]</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Perceived risks of the use of AI</title>
        <p>When asked about the major perceived risks of AI use in their medical routine, doctors' opinions cluster around three main types of risk:
• Specialists' skills and competences cannot be equalled by AI (answers "AI might not be flexible enough to cope with unknown data"; "AI cannot compete with the know-how of expert personnel");
• Doctors urge a participatory design approach ("AI is developed without any cognizance of clinical practice"; "AI results are not immediately interpretable");
• Doctors fear that AI cannot be considered a moral agent ("AI has no moral duty"; "AI might sever clinician-patient trust").</p>
        <p>[Figure: Perceived risks of AI use; answer options include: "Patients who consented to the treatment of their data with AI"; "AI is developed without any cognizance of clinical practice"; "AI might not be flexible"; "AI might induce to delegate responsibilities"; "AI has no moral duty"; "AI results are not immediately interpretable"; "AI might sever clinician-patient trust"; "AI errors might cause legal repercussions"; "AI cannot compete with the know-how of expert personnel"; "AI might reduce the need of clinicians"; answers in %.]</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Adoption of AI in medical facilities</title>
        <p>The main perceived barriers to the adoption of AI tools by hospitals and clinics are their economic resources and the attitude of their management. Large centres led by innovation-driven management tend to favour the introduction of AI, while in small and less wealthy centres AI faces more difficulties in fitting in (Fig. 7 and 8).</p>
        <p>[Figure: Use cases for AI at the institution level. Question "If in possession of AI tools, what does your institution use these tools for?"; answer options: "Support to image reading"; "Performing automated tasks (image acquisition, registration, and/or segmentation)"; "Medical files management"; "Support to decision making"; "Increase workflow efficiency"; "Robotics, intervention, or screening systems for surgery"; "Remote therapy control"; "Personalized therapy"; answers in %.]</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.6. The scope of AI use</title>
        <p>Doctors were asked "If your institution possesses AI tools, what are they used for?". Due to the prevalence of radiologists in our sample population, answers relating to imaging-related tasks (e.g. "Support to image reading" and "Performing automated tasks") are the most common (Fig. 9).</p>
      </sec>
      <sec id="sec-4-7">
        <title>4.7. Reactions to unexpected AI predictions</title>
        <p>AI tools are often used to support decision making. This raises an important question: how do medical professionals respond when AI-generated results contradict their pre-existing opinions? To explore this, the survey asked: "What is your reaction when confronted with unexpected results from AI systems?". Respondents could choose one of the following options: "I consider discrepancies to be ignored, as I believe in the superiority of humans over the automated system"; "I consider discrepancies to be ignored, as I believe that automated systems are always prone to faulty design"; "I consider discrepancies to be ignored, as I believe in the superiority of automatic systems over humans in certain situations"; "I consider discrepancies to be taken into account, as I believe in complementarity between humans and automatic systems".</p>
        <p>Results (Fig. 10) show that the vast majority of doctors are inclined to actively investigate the reasons behind the conflicting output and decide whether to revise their opinions accordingly or to disregard the AI output. This tendency seems to be shared by doctors regardless of their level of familiarity with AI in their work environment.</p>
      </sec>
      <sec id="sec-4-8">
        <title>4.8. Relevance of AI explainability</title>
        <p>The term "Explainable AI" (xAI) refers to techniques that aim to make AI tools more transparent to users, providing them with insights into the decision process. This is of the utmost importance in the medical environment, where the final users have to make critical decisions based on the output of automated tools and must therefore be made aware of the internal decision-making pipeline. Explainability is also said to be a way to increase trust in AI tools (Markus et al., 2021 [9]).</p>
        <p>With an eye to software development, we asked doctors which method of explainability would be the most effective. The optional question "Assuming we have an AI algorithm capable of providing an explanation about the decision made, this should consist of…" let users choose among the traditional xAI approaches: "Text: AI explains in natural language why it made that decision", "Visualization: AI shows the variables/regions of the image that the algorithm considered most important in its decision" and "Examples: AI explains the decision by presenting similar cases and/or counter-examples". Almost all clinicians selected more than one answer. A slight preference is expressed for the visual approach to xAI (Fig. 11). This might be due to the imaging-related bias in the survey population, where, additionally, by-example approaches cannot always be applied.</p>
        <p>[Figure: Clinicians' reactions upon discrepancies in AI output. Question "What is your reaction when faced with results that differ from those expected?"; answers in %. Figure: preferred xAI approach (Visualization, Text, Examples); answers in %.]</p>
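        <p>The preferred "Visualization" option corresponds to saliency-style xAI methods. As a purely illustrative sketch of one such technique, not of any tool surveyed here, an occlusion-sensitivity map can be computed with a generic, stand-in scoring function (the function name and patch size below are assumptions):</p>

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Occlusion sensitivity: slide a neutral patch over the image and record
    how much the model's score drops; large drops mark influential regions."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            # replace the patch with the image mean (a neutral value)
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat
```

        <p>Regions whose occlusion causes the largest score drop are the ones the model relied on most, which is precisely the "variables/regions of the image that the algorithm considered most important" that clinicians asked to see.</p>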
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>Responses to our survey indicate that doctors would like to see a paradigm shift in AI development, with medical experts favouring a participatory approach that involves them, alongside software developers, at the stages of requirements specification, algorithm development, results presentation and validation.</p>
      <p>The development process could involve human decision-makers in one or both of the following steps: data preparation for AI training and interpretation of outputs. Support for data preparation can be provided by selecting, sorting, or interactively and iteratively annotating training data ("active learning"), while correcting or ranking model predictions can help enhance the interpretation of outputs (Budd et al., 2021 [10]).</p>
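      <p>The interactive annotation step above can be sketched as a minimal uncertainty-based active learning round (a generic illustration under assumed names: the `model`, the `annotate` callback standing in for a clinician, and the margin-based uncertainty measure are placeholders, not components of any specific clinical system):</p>

```python
import numpy as np

def margin_uncertainty(probs):
    """Uncertainty as the (negated) margin between the two most likely classes:
    a small margin means the model is unsure, so annotation is most informative."""
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def active_learning_round(model, X_labeled, y_labeled, X_pool, annotate, budget=10):
    """One round: rank unlabeled cases by uncertainty, ask the annotator
    (e.g. a clinician) to label the top `budget` cases, then retrain."""
    probs = model.predict_proba(X_pool)
    query_idx = np.argsort(margin_uncertainty(probs))[-budget:]
    new_y = np.array([annotate(X_pool[i]) for i in query_idx])
    X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
    y_labeled = np.concatenate([y_labeled, new_y])
    model.fit(X_labeled, y_labeled)
    X_pool = np.delete(X_pool, query_idx, axis=0)
    return model, X_labeled, y_labeled, X_pool
```

      <p>The annotator only labels the cases the model is least sure about, which is what makes the iterative annotation loop efficient compared with labelling the whole dataset upfront.</p>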
      <p>In the most relevant use case for our sample population (i.e., using AI to support the interpretation or manipulation of imaging data), the final judgment of the AI user will also be based on other factors that are difficult to measure, such as the current state of the patient. Therefore, the most important goal of AI may be to provide "recommendations" that inform the decision-making process.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>The results of the survey show that several obstacles remain to the effective use of AI in clinical practice: for institutions (i.e., financial resources), for developers (i.e., transparency of software, dataset bias), and for lawmakers (i.e., regulation on data, responsibility for misuse).</p>
      <p>According to medical specialists, saving time is crucial, but efficiency cannot come at the expense of accuracy. Above all, results must be reliable; rapid production is a secondary consideration.</p>
      <p>The survey responses indicate a demand for a "participatory approach" to the development and use of AI in clinical practice: doctors are unlikely to trust "black-box" AI tools; rather, they expect to be involved in the development and validation of AI tools in order to ensure that these tools are actually useful in their work environment.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>We acknowledge funding from the project PNRR M4C2 Inv. 1.3, PE00000013 - "FAIR - Future Artificial Intelligence Research - Spoke 8 Pervasive AI", NextGeneration EU, as the framework within which this work has been carried out. We also acknowledge the crucial help of the medical associations that helped spread the survey among their subscribers: for radiologists, "Società Italiana di Radiologia Medica e Interventistica" (SIRM), "Associazione Italiana Risonanza Magnetica in Medicina" (AIRMM) and "Società Italiana Ultrasonologia in Medicina e Biologia" (SIUMB); for nuclear medicine, "Associazione Italiana Medicina Nucleare" (AIMN); for neurologists, "Società Italiana Neurologia" (SIN), "Associazione Italiana di Neuroradiologia Diagnostica e Radiologia" (AINR), "Associazione Neurologica Italiana per la Ricerca nelle Cefalee" (ANIRCEF) and "Società Italiana per lo Studio delle Cefalee" (SISC); for radiotherapists and radiation oncologists, "Associazione Italiana di Radioterapia Oncologica" (AIRO); for medical physicists, "Associazione Italiana Fisica Medica" (AIFM); and other multidisciplinary associations: "Società Italiana Intelligenza Artificiale in Medicina" (SIIAM) and "Alleanza contro il cancro" (ACC).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-9">
      <title>References</title>
      <p>[2] Z. Obermeyer, B. Powers, C. Vogeli, S. Mullainathan, Dissecting racial bias in an algorithm used to manage the health of populations, Science 366 (2019) 447–453. doi:10.1126/science.aax2342.</p>
      <p>[3] A. Rockall, A. P. Brady, L. E. Derchi, The identity and role of the radiologist in 2020: A survey among European Society of Radiology full radiologist members, Insights into Imaging 11 (2020) 130. doi:10.1186/s13244-020-00945-9.</p>
      <p>[4] G. Starke, R. van den Brule, Intentional machines: A defence of trust in medical artificial intelligence, Bioethics 36 (2022) 154–161. doi:10.1111/bioe.12891.</p>
      <p>[5] L. Sartori, S. Cannizzaro, M. Musmeci, C. Binelli, When the white coat meets the code: medical professionals and artificial intelligence (AI) in Italy negotiating with trust and boundary work, Health, Risk &amp; Society 27 (2025) 7–8.</p>
      <p>[6] P. J. Schulz, M. O. Lwin, Modeling the influence of attitudes, trust, and beliefs on endoscopists' acceptance of artificial intelligence applications in medical practice, Frontiers in Public Health (2023). doi:10.3389/fpubh.2023.1301563.</p>
      <p>[7] L. Sartori, G. Bocca, Minding the gap(s): public perceptions of AI and socio-technical imaginaries, AI &amp; SOCIETY (2022) 1–16. doi:10.1007/s00146-022-01422-1.</p>
      <p>[8] M. C. Elish, E. A. Watkins, Repairing innovation: A study of integrating AI in clinical care (2020).</p>
      <p>[9] A. F. Markus, J. A. Kors, P. R. Rijnbeek, The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies, Journal of Biomedical Informatics 113 (2021). doi:10.1016/j.jbi.2020.103655.</p>
      <p>[10] S. Budd, E. C. Robinson, B. Kainz, A survey on active learning and human-in-the-loop deep learning for medical image analysis, Medical Image Analysis 71 (2021). doi:10.1016/j.media.2021.102062.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Goldfarb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Teodoridis</surname>
          </string-name>
          ,
          <article-title>Why is AI adoption in healthcare lagging?</article-title>
          , Brookings series,
          <source>The Economics and Regulation of Artificial Intelligence and Emerging Technologies</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>